Dataset columns: `modelId` (string, length 5 to 138), `author` (string, length 2 to 42), `last_modified` (date, 2020-02-15 11:33:14 to 2025-04-12 18:26:42), `downloads` (int64, 0 to 223M), `likes` (int64, 0 to 11.7k), `library_name` (string, 422 classes), `tags` (sequence, length 1 to 4.05k), `pipeline_tag` (string, 54 classes), `createdAt` (date, 2022-03-02 23:29:04 to 2025-04-12 18:26:09), `card` (string, length 11 to 1.01M).

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
not-lain/Gemma-2b-Peft-finetuning | not-lain | "2024-03-22T05:08:50Z" | 12 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:other",
"region:us"
] | null | "2024-03-22T05:01:03Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
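As a usage sketch (not part of the generated card), a PEFT adapter like this one can typically be loaded on top of its base model; the repository id below is this model's own Hub id, everything else is standard `peft`/`transformers` usage:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch, assuming the adapter weights live at the card's Hub id.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "not-lain/Gemma-2b-Peft-finetuning")
```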
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
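For illustration, a minimal sketch of how these settings might map onto `transformers.TrainingArguments`; the `output_dir` and the trainer wiring are assumptions, not taken from this card:
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="outputs",            # assumption: matches the model name
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # total train batch size: 1 * 4 = 4
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=10,
    fp16=True,                       # "Native AMP" mixed precision
)
```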
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
miao1234/furniture_use_data_finetuning | miao1234 | "2023-10-30T10:35:06Z" | 33 | 0 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2023-10-29T19:33:33Z" | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: furniture_use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furniture_use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
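As a usage sketch (not part of the generated card), a fine-tuned DETR checkpoint can usually be run through the `transformers` object-detection pipeline:
```python
from transformers import pipeline

# Minimal sketch, assuming the checkpoint on the Hub loads as-is;
# the DETR ResNet backbone may additionally require the `timm` package.
detector = pipeline("object-detection", model="miao1234/furniture_use_data_finetuning")
results = detector("room.jpg")  # path or URL to an image (hypothetical file)
for r in results:
    print(r["label"], round(r["score"], 3), r["box"])
```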
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ArchiveAI/Thespis-Balanced-7b-v1 | ArchiveAI | "2024-03-15T06:38:20Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-15T06:38:20Z" | ---
license: cc-by-nc-4.0
---
IT'S PRETTY COOL! If you need a README, go look at one of the other models I've posted. The prompt format is the same. I'll add something better after I've slept. |
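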
darkc0de/BuddyGlassUncensored2025.4 | darkc0de | "2025-03-02T15:41:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:TheDrummer/Cydonia-24B-v2",
"base_model:merge:TheDrummer/Cydonia-24B-v2",
"base_model:cognitivecomputations/Dolphin3.0-Mistral-24B",
"base_model:merge:cognitivecomputations/Dolphin3.0-Mistral-24B",
"base_model:huihui-ai/Arcee-Blitz-abliterated",
"base_model:merge:huihui-ai/Arcee-Blitz-abliterated",
"base_model:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"base_model:merge:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"base_model:mistralai/Mistral-Small-24B-Instruct-2501",
"base_model:merge:mistralai/Mistral-Small-24B-Instruct-2501",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-02T15:24:39Z" | ---
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
- huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
- TheDrummer/Cydonia-24B-v2
- huihui-ai/Arcee-Blitz-abliterated
- cognitivecomputations/Dolphin3.0-Mistral-24B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) as a base.
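As a rough, self-contained sketch of the DARE idea (randomly dropping fine-tuned parameter deltas and rescaling the survivors), not mergekit's actual implementation:
```python
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each delta entry with probability `density`, rescale by 1/density."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return mask * delta / density

# Toy tensors standing in for one weight matrix of base and fine-tuned models.
base = torch.randn(4, 4)
finetuned = base + 0.01 * torch.randn(4, 4)
merged = base + dare_sparsify(finetuned - base, density=0.5)
# With several source models, the TIES step then elects a sign per parameter
# and combines only the agreeing deltas before adding them to the base.
```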
### Models Merged
The following models were included in the merge:
* [huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated](https://huggingface.co/huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated)
* [TheDrummer/Cydonia-24B-v2](https://huggingface.co/TheDrummer/Cydonia-24B-v2)
* [huihui-ai/Arcee-Blitz-abliterated](https://huggingface.co/huihui-ai/Arcee-Blitz-abliterated)
* [cognitivecomputations/Dolphin3.0-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Mistral-24B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cognitivecomputations/Dolphin3.0-Mistral-24B
parameters:
density: 0.5
weight: 0.5
- model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
parameters:
density: 0.5
weight: 0.5
- model: TheDrummer/Cydonia-24B-v2
parameters:
density: 0.5
weight: 0.5
- model: huihui-ai/Arcee-Blitz-abliterated
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-Small-24B-Instruct-2501
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
ibrocalculus/example_model2 | ibrocalculus | "2025-02-25T09:02:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-25T08:47:46Z" | # Example Model2
###### This is a second sample model I created for practice purposes
---
license: mit
---
|
devcharmander/toastmaster | devcharmander | "2023-12-01T11:10:54Z" | 0 | 0 | null | [
"coreml",
"region:us"
] | null | "2023-12-01T10:51:39Z" | ## Whisper model files in custom ggml format
The [original Whisper PyTorch models provided by OpenAI](https://github.com/openai/whisper/blob/main/whisper/__init__.py#L17-L27)
are converted to a custom `ggml` format so that they can be loaded in C/C++.
Conversion is performed using the [convert-pt-to-ggml.py](convert-pt-to-ggml.py) script.
You can either obtain the original models and generate the `ggml` files yourself using the conversion script,
or you can use the [download-ggml-model.sh](download-ggml-model.sh) script to download the already converted models.
Currently, they are hosted at the following locations:
- https://huggingface.co/ggerganov/whisper.cpp
- https://ggml.ggerganov.com
Sample download:
```bash
$ ./download-ggml-model.sh base.en
Downloading ggml model base.en ...
models/ggml-base.en.bin 100%[=============================================>] 141.11M 5.41MB/s in 22s
Done! Model 'base.en' saved in 'models/ggml-base.en.bin'
You can now use it like this:
$ ./main -m models/ggml-base.en.bin -f samples/jfk.wav
```
To convert the files yourself, use the convert-pt-to-ggml.py script. Here is an example usage.
The original PyTorch files are assumed to have been downloaded into `~/.cache/whisper`.
Change `~/path/to/repo/whisper/` to the location for your copy of the Whisper source:
```bash
mkdir models/whisper-medium
python models/convert-pt-to-ggml.py ~/.cache/whisper/medium.pt ~/path/to/repo/whisper/ ./models/whisper-medium
mv ./models/whisper-medium/ggml-model.bin models/ggml-medium.bin
rmdir models/whisper-medium
```
A third option to obtain the model files is to download them from Hugging Face:
https://huggingface.co/ggerganov/whisper.cpp/tree/main
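For example, a small sketch using `huggingface_hub` to fetch a converted model programmatically (repo and filename follow the table below):
```python
from huggingface_hub import hf_hub_download

# Download one of the converted ggml models into the local Hub cache.
path = hf_hub_download(repo_id="ggerganov/whisper.cpp", filename="ggml-base.en.bin")
print(path)  # local path, usable as: ./main -m <path> -f samples/jfk.wav
```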
## Available models
| Model | Disk | SHA |
| --- | --- | --- |
| tiny | 75 MiB | `bd577a113a864445d4c299885e0cb97d4ba92b5f` |
| tiny.en | 75 MiB | `c78c86eb1a8faa21b369bcd33207cc90d64ae9df` |
| base | 142 MiB | `465707469ff3a37a2b9b8d8f89f2f99de7299dac` |
| base.en | 142 MiB | `137c40403d78fd54d454da0f9bd998f78703390c` |
| small | 466 MiB | `55356645c2b361a969dfd0ef2c5a50d530afd8d5` |
| small.en | 466 MiB | `db8a495a91d927739e50b3fc1cc4c6b8f6c2d022` |
| medium | 1.5 GiB | `fd9727b6e1217c2f614f9b698455c4ffd82463b4` |
| medium.en | 1.5 GiB | `8c30f0e44ce9560643ebd10bbe50cd20eafd3723` |
| large-v1 | 2.9 GiB | `b1caaf735c4cc1429223d5a74f0f4d0b9b59a299` |
| large-v2 | 2.9 GiB | `0f4c8e34f21cf1a914c59d8b3ce882345ad349d6` |
| large-v3 | 2.9 GiB | `ad82bf6a9043ceed055076d0fd39f5f186ff8062` |
## Model files for testing purposes
The model files prefixed with `for-tests-` are empty (i.e. do not contain any weights) and are used by the CI for
testing purposes. They are directly included in this repository for convenience and the GitHub Actions CI uses them to
run various sanitizer tests.
## Fine-tuned models
There are community efforts for creating fine-tuned Whisper models using extra training data. For example, this
[blog post](https://huggingface.co/blog/fine-tune-whisper) describes a method for fine-tuning using Hugging Face (HF)
Transformers implementation of Whisper. The produced models are in a slightly different format compared to the original
OpenAI format. To read the HF models you can use the [convert-h5-to-ggml.py](convert-h5-to-ggml.py) script like this:
```bash
git clone https://github.com/openai/whisper
git clone https://github.com/ggerganov/whisper.cpp
# clone HF fine-tuned model (this is just an example)
git clone https://huggingface.co/openai/whisper-medium
# convert the model to ggml
python3 ./whisper.cpp/models/convert-h5-to-ggml.py ./whisper-medium/ ./whisper .
```
## Distilled models
Initial support for https://huggingface.co/distil-whisper is available.
Currently, the chunk-based transcription strategy is not implemented, so transcription quality can be sub-optimal when using the distilled models with `whisper.cpp`.
```bash
# clone OpenAI whisper and whisper.cpp
git clone https://github.com/openai/whisper
git clone https://github.com/ggerganov/whisper.cpp
# get the models
cd whisper.cpp/models
git clone https://huggingface.co/distil-whisper/distil-medium.en
git clone https://huggingface.co/distil-whisper/distil-large-v2
# convert to ggml
python3 ./convert-h5-to-ggml.py ./distil-medium.en/ ../../whisper .
mv ggml-model.bin ggml-medium.en-distil.bin
python3 ./convert-h5-to-ggml.py ./distil-large-v2/ ../../whisper .
mv ggml-model.bin ggml-large-v2-distil.bin
```
|
sd-concepts-library/jojo-bizzare-adventure-manga-lineart | sd-concepts-library | "2022-09-21T15:03:39Z" | 0 | 1 | null | [
"license:mit",
"region:us"
] | null | "2022-09-21T15:03:33Z" | ---
license: mit
---
### JoJo Bizarre Adventure manga lineart on Stable Diffusion
This is the `<JoJo_lineart>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
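As an alternative to the notebooks, a hedged sketch of loading the concept with `diffusers`; the base checkpoint below is an assumption, not specified by this card (the repo id spelling matches the actual repository):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base checkpoint; any SD 1.x pipeline compatible with the embedding works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/jojo-bizzare-adventure-manga-lineart")
image = pipe("a portrait in <JoJo_lineart> style").images[0]
image.save("jojo_lineart.png")
```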
Here is the new concept you will be able to use as a `style`:















|
havinash-ai/148d2316-ea3d-4ae8-b42e-b2f01ebe44e2 | havinash-ai | "2025-01-14T12:44:47Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-14T12:43:14Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 148d2316-ea3d-4ae8-b42e-b2f01ebe44e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8bab2020b11caa57_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8bab2020b11caa57_train_data.json
type:
field_input: text
field_instruction: query
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/148d2316-ea3d-4ae8-b42e-b2f01ebe44e2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8bab2020b11caa57_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd546f03-61db-499f-a81c-027d8a071d30
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fd546f03-61db-499f-a81c-027d8a071d30
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 148d2316-ea3d-4ae8-b42e-b2f01ebe44e2
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8679
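As a usage sketch (not part of the generated card), the LoRA adapter can typically be applied to its base model with `peft`, and optionally merged into the base weights:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Minimal sketch, assuming the adapter weights live at the card's Hub id.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-7b-it")
model = PeftModel.from_pretrained(base, "havinash-ai/148d2316-ea3d-4ae8-b42e-b2f01ebe44e2")
model = model.merge_and_unload()  # folds the LoRA weights into the base model
```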
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.215 | 0.0018 | 1 | 2.0531 |
| 1.7352 | 0.0053 | 3 | 1.9725 |
| 1.4294 | 0.0105 | 6 | 1.2547 |
| 0.8574 | 0.0158 | 9 | 0.8679 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
pankajrajdeo/Bioformer-8L-UMLS-Pubmed_PMC-ST-TCE-Epoch-1 | pankajrajdeo | "2025-02-02T04:49:59Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6150902",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-02T04:49:47Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6150902
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: '[YEAR_RANGE] 2021-2025 [TEXT] Semantic Stroop interference is
modulated by the availability of executive resources: Insights from delta-plot
analyses and cognitive load manipulation'
sentences:
- '[YEAR_RANGE] 2021-2025 [TEXT] We investigated whether, during visual word recognition,
semantic processing is modulated by attentional control mechanisms directed at
matching semantic information with task-relevant goals. In previous research,
we analyzed the semantic Stroop interference as a function of response latency
(delta-plot analyses) and found that this phenomenon mainly occurs in the slowest
responses. Here, we investigated whether this pattern is due to reduced ability
to proactively maintain the task goal in these slowest trials. In two pairs of
experiments, participants completed two semantic Stroop tasks: a classic semantic
Stroop task (Experiment 1A and 2A) and a semantic Stroop task combined with an
n-back task (Experiment 1B and 2B). The two pairs of experiments only differed
in the trial pace, which was slightly faster in Experiments 2A and 2B than in
Experiments 1A and 1B. By taxing the executive control system, the n-back task
was expected to hinder proactive control. Delta-plot analyses of the semantic
Stroop task replicated the enhanced effect in the slowest responses, but only
under sufficient time pressure. Combining the semantic Stroop task with the n-back
task produced a change in the distributional profile of semantic Stroop interference,
which we ascribe to a general difficulty in the use of proactive control. Our
findings suggest that semantic Stroop interference is, to some extent, dependent
on the available executive resources, while also being sensitive to subtle variations
in task conditions.Supplementary InformationThe online version contains supplementary
material available at 10.3758/s13421-024-01552-5.'
- '[YEAR_RANGE] 2021-2025 [TEXT] Priority question exercises are increasingly used
to frame and set future research, innovation and development agendas. They can
provide an important bridge between the discoveries, data and outputs generated
by researchers, and the information required by policy makers and funders. Microbial
biofilms present huge scientific, societal and economic opportunities and challenges.
In order to identify key priorities that will help to advance the field, here
we review questions from a pool submitted by the international biofilm research
community and from practitioners working across industry, the environment and
medicine. To avoid bias we used computational approaches to group questions and
manage a voting and selection process. The outcome of the exercise is a set of
78 unique questions, categorized in six themes: (i) Biofilm control, disruption,
prevention, management, treatment (13 questions); (ii) Resistance, persistence,
tolerance, role of aggregation, immune interaction, relevance to infection (10
questions); (iii) Model systems, standards, regulatory, policy education, interdisciplinary
approaches (15 questions); (iv) Polymicrobial, interactions, ecology, microbiome,
phage (13 questions); (v) Clinical focus, chronic infection, detection, diagnostics
(13 questions); and (vi) Matrix, lipids, capsule, metabolism, development, physiology,
ecology, evolution environment, microbiome, community engineering (14 questions).
The questions presented are intended to highlight opportunities, stimulate discussion
and provide focus for researchers, funders and policy makers, informing future
research, innovation and development strategy for biofilms and microbial communities.'
- '[YEAR_RANGE] 2021-2025 [TEXT] Polymer compounds have become a popular choice
for the synthesis of novel products and are being used in cementitious mixtures
principally for altering the properties in the fresh state and as repair materials.
These polymers are used in various combinations. Their interaction with cement
is worth studying because its hydration, followed by setting and hardening, is
the primary phenomenon contributing to the strength gain and performance of concrete.
This paper summarizes the effects of different polymers on the hydration of cement
and the properties of concrete/mortar. Studies have established that the incorporation
of polymers as a workability enhancing admixture or for improving strength, durability,
and other properties severely affects the early hydration of cement and reduces
the overall strength gain in most cases. The hydration retarding effect depends
on the charge, architecture, and the amount (wt %) of polymer added. However,
owing to the densification of the interfacial transition zone and formation of
polymer films/bridges between stacks of calcium hydroxide surfaces and air, the
later age properties show beneficial effects such as higher flexural strength,
enhanced compressive strength, and modulus of elasticity, better resistance against
frost, and corrosion of steel reinforcement. Further, it is seen that the hydration
retardation may be mitigated to some extent by the addition of silica fume or
zeolite; using a defoaming agent; curing at high temperatures; and following a
combination of wet, moist, and dry curing regimes. This review is expected to
be helpful to all practicing civil engineers who are the immediate users of these
chemicals and are working to achieve quality concrete construction.'
- source_sentence: '[YEAR_RANGE] 2021-2025 [TEXT] The basic biology of NK cells and
its application in tumor immunotherapy'
sentences:
- '[YEAR_RANGE] 2021-2025 [TEXT] Natural Killer (NK) cells play a crucial role as
effector cells within the tumor immune microenvironment, capable of identifying
and eliminating tumor cells through the expression of diverse activating and inhibitory
receptors that recognize tumor-related ligands. Therefore, harnessing NK cells
for therapeutic purposes represents a significant adjunct to T cell-based tumor
immunotherapy strategies. Presently, NK cell-based tumor immunotherapy strategies
encompass various approaches, including adoptive NK cell therapy, cytokine therapy,
antibody-based NK cell therapy (enhancing ADCC mediated by NK cells, NK cell engagers,
immune checkpoint blockade therapy) and the utilization of nanoparticles and small
molecules to modulate NK cell anti-tumor functionality. This article presents
a comprehensive overview of the latest advances in NK cell-based anti-tumor immunotherapy,
with the aim of offering insights and methodologies for the clinical treatment
of cancer patients.'
- '[YEAR_RANGE] 2021-2025 [TEXT] Background and study aims The optimal number of
needle passes during endoscopic ultrasound-guided fine-needle biopsy (EUS-FNB)
is not yet established. We aimed to perform a per-pass analysis of the diagnostic
accuracy of EUS-FNB of solid pancreatic lesions using a 22G Franseen needle. Patients
and methods Consecutive patients with solid pancreatic lesions referred to 11
Italian centers were prospectively enrolled. Three needle passes were performed;
specimens were collected after each pass and processed individually as standard
histology following macroscopic on-site evaluation (MOSE) by the endoscopist.
The primary endpoint was diagnostic accuracy of each sequential pass. Final diagnosis
was established based on surgical pathology or a clinical course of at least 6
months. Secondary endpoints were specimen adequacy, MOSE reliability, factors
impacting diagnostic accuracy, and procedure-related adverse events. Results A
total of 504 samples from 168 patients were evaluated. Diagnostic accuracy was
90.5% (85.0%–94.1%) after one pass and 97.6% (94.1%–99.3%) after two passes (
P =0.01). Similarly, diagnostic sensitivity and sample adequacy were significantly
higher adding the second needle pass (90.2%, 84.6%–94.3% vs 97.5%, 93.8%–99.3%,
P =0.009 and 91.1%, 85.7%-94.9% vs 98.2%, 95.8%–99.3%, P =0.009, one pass vs two
passes, respectively). Accuracy, sensitivity, and adequacy remained the same after
the third pass. The concordance between MOSE and histological evaluation was 89.9%.
The number of passes was the only factor associated with accuracy. One case of
mild acute pancreatitis (0.6%) was managed conservatively. Conclusions At least
two passes should be performed for the diagnosis of solid pancreatic lesions.
MOSE is a reliable tool to predict the histological adequacy of specimens.'
- '[YEAR_RANGE] 2021-2025 [TEXT] After over a hundred years of research, the question
whether the symptoms of schizophrenia are rather trait-like (being a relatively
stable quality of individuals) or state-like (being substance to change) is still
unanswered. To assess the trait and the state component in patients with acute
schizophrenia, one group receiving antipsychotic treatment, the other not. Data
from four phase II/III, 6-week, randomized, double-blind, placebo-controlled trials
of similar design that included patients with acute exacerbation of schizophrenia
were pooled. In every trial, one treatment group received a third-generation antipsychotic,
cariprazine, and the other group placebo. To assess symptoms of schizophrenia,
the Positive and Negative Symptom Scale (PANSS) was applied. Further analyses
were conducted using the five subscales as proposed by Wallwork and colleagues.
A latent state–trait (LST) model was developed to estimate the trait and state
components of the total variance of the observed scores. All symptom dimensions
behaved more in a trait-like manner. The proportions of all sources of variability
changed over the course of the observational period, with a bent around weeks
3 and 4. Visually inspected, no major differences were found between the two treatment
groups regarding the LST structure of symptom dimensions. This high proportion
of inter-individual stability may represent an inherent part of symptomatology
that behaves independently from treatment status.Supplementary InformationThe
online version contains supplementary material available at 10.1007/s00406-024-01790-3.'
- source_sentence: '[YEAR_RANGE] 2021-2025 [TEXT] Robotic-assisted minimally invasive
repair surgery for progressive spondylolysis in a young athlete: a technical note'
sentences:
- '[YEAR_RANGE] 2021-2025 [TEXT] AbstractCXCL12 acts as a chemoattractant by binding
to the receptor CXCR4. The (atypical) chemokine receptor ACKR3 (CXCR7) scavenges
CXCL12. Antagonism of ACKR3 thus leads to an increase in CXCL12 concentrations
that has been used as a pharmacodynamic biomarker in healthy adults. Increased
CXCL12 concentrations have also been linked to repair mechanisms in human diseases
and mouse models. To date, CXCL12 concentrations have typically been quantified
using antibody‐based assays with overlapping or unclear specificity for the various
CXCL12 isoforms (α, β, and γ) and proteoforms. Only the N‐terminal full‐length
CXCL12 proteoform is biologically active and can engage CXCR4 and ACKR3, but this
proteoform could so far not be quantified in healthy adults. Here, we describe
a new and fit‐for‐purpose validated immunoaffinity mass spectrometry (IA‐MS) assay
for specific measurement of five CXCL12α proteoforms in human plasma, including
the biologically active CXCL12α proteoform. This biomarker assay was used in a
phase I clinical study with the ACKR3 antagonist ACT‐1004‐1239. In placebo‐treated
healthy adults, 1.0 nM total CXCL12α and 0.1 nM biologically active CXCL12α was
quantified. The concentrations of both proteoforms increased up to two‐fold in
healthy adults compared to placebo following drug administration. At all dose
levels, 10% of the CXCL12α was the biologically active proteoform and the simultaneous
increase of all proteoforms suggests that a new steady state has been reached
24 h following dosing. Hence, this IA‐MS biomarker assay can be used to specifically
measure active CXCL12 proteoform concentrations in clinical trials to demonstrate
target engagement and correlate with clinical outcomes.'
- '[YEAR_RANGE] 2021-2025 [TEXT] Background and objectivePatients suspected to have
lung cancer, undergo endobronchial ultrasound bronchoscopy (EBUS) for the purpose
of diagnosis and staging. For presumptive curable patients, the EBUS bronchoscopy
is planned based on images and data from computed tomography (CT) images and positron
emission tomography (PET). Our study aimed to evaluate the feasibility of a multimodal
electromagnetic navigation platform for EBUS bronchoscopy, integrating ultrasound
and segmented CT, and PET scan imaging data.MethodsThe proof-of-concept study
included patients with suspected lung cancer and pathological mediastinal/hilar
lymph nodes identified on both CT and PET scans. Images obtained from these two
modalities were segmented to delineate target lymph nodes and then incorporated
into the CustusX navigation platform. The EBUS bronchoscope was equipped with
a sensor, calibrated, and affixed to a 3D printed click-on device positioned at
the bronchoscope’s tip. Navigation accuracy was measured postoperatively using
ultrasound recordings.ResultsThe study enrolled three patients, all presenting
with suspected mediastinal lymph node metastasis (N1-3). All PET-positive lymph
nodes were displayed in the navigation platform during the EBUS procedures. In
total, five distinct lymph nodes were sampled, yielding malignant cells from three
nodes and lymphocytes from the remaining two. The median accuracy of the navigation
system was 7.7 mm.ConclusionOur study introduces a feasible multimodal electromagnetic
navigation platform that combines intraoperative ultrasound with preoperative
segmented CT and PET imaging data for EBUS lymph node staging examinations. This
innovative approach holds promise for enhancing the accuracy and effectiveness
of EBUS procedures.'
- '[YEAR_RANGE] 2021-2025 [TEXT] AbstractPresently, the invasiveness of direct repair
surgery for lumbar spondylolysis is relatively high. Thus, high school and junior
high school students who play sports often cannot return to sports before graduation
because of the invasiveness. The use of a robotic system enabled an accurate and
minimally invasive procedure. Robotic-assisted minimally invasive direct pars
repair surgery is useful for young patients with progressive spondylolysis.'
- source_sentence: '[YEAR_RANGE] 2021-2025 [TEXT] An artificial intelligence-based
nerve recognition model is useful as surgical support technology and as an educational
tool in laparoscopic and robot-assisted rectal cancer surgery'
sentences:
- '[YEAR_RANGE] 2021-2025 [TEXT] BackgroundArtificial intelligence and 0.292, respectively.
The colorectal surgeons revealed an under-detection score of 0.80 (± 0.47), an
over-detection score of 0.58 (± 0.41), and a usefulness evaluation score of 3.38
(± 0.43). The nerve recognition scores of non-colorectal surgeons, rotating residents,
and medical students significantly improved by simply watching the AI nerve recognition
videos for 1 min. Notably, medical students showed a more substantial increase
in nerve recognition scores when exposed to AI nerve analysis videos than when
exposed to traditional lectures on nerves.ConclusionsIn laparoscopic and robot-assisted
rectal cancer surgeries, the AI-based nerve recognition model achieved satisfactory
recognition levels for expert surgeons and demonstrated effectiveness in educating
junior surgeons and medical students on nerve recognition.Supplementary InformationThe
online version contains supplementary material available at 10.1007/s00464-024-10939-z.'
- '[YEAR_RANGE] 2021-2025 [TEXT] Sialodochitis fibrinosa is a rare disease characterized
by paroxysmal swelling of the salivary glands and discharge of fibrous masses
containing eosinophils from the salivary gland orifice. Diagnosis was traditionally
based on irregular dilation of the main salivary duct by sialography, but now
includes the imaging findings of magnetic resonance imaging (MRI). In the present
patient, short TI inversion recovery (STIR) MRI sequence was able to identify
Stensen''s duct dilation and additionally depict cystic dilation due to stenosis
of the orifice and multiple cystic dilations within the parotid gland body. Treatment
was performed on each of the lesion sites identified by MRI. The patient was successfully
treated with compressive gland massage for lesions within the body of the parotid,
and bougienage was performed for stenosis of Stensen''s duct orifice, with duct
flushing for dilation of Stensen''s duct. These findings suggest that MRI could
replace sialography and has the advantages of being noninvasive, having a wide
observation area, and enabling observation within the glandular body. Here, we
report the case of a patient in whom accurate identification of the site of the
lesion enabled selection of appropriate treatment for each site.'
- '[YEAR_RANGE] 2021-2025 [TEXT] Objective To explore the value of the injury severity
score curve (AUC) and Hosmer‒Lemeshow (H-L) statistic. Results A total of 310
patients were included. ISS and NISS of patients with complications and poor prognoses
were greater than those of patients without complications and poor prognoses,
respectively. The discrimination of ISS in predicting pneumonia, respiratory failure,
in-hospital tracheal intubation, extended length of hospital stay, ICU admission,
prolonged ICU stay, and death (AUCs: 0.609, 0.721, 0.848, 0.784, 0.763, 0.716,
and 0.804, respectively) was not statistically significantly different from that
of NISS in predicting the corresponding outcomes (AUCs: 0.628, 0.712, 0.795, 0.767,
0.750, 0.750, and 0.818, respectively). ISS showed better calibration than NISS
for predicting pneumonia, respiratory failure, in-hospital tracheal intubation,
extended length of hospital stay, and ICU admission but worse calibration for
predicting prolonged ICU stay and death. Conclusion ISS and NISS are both suitable
for injury evaluation. There was no statistically significant difference in discrimination
between ISS and NISS, but they had different calibrations when predicting different
outcomes.'
- source_sentence: '[YEAR_RANGE] 2021-2025 [TEXT] Combined hyperglycemic crises in
adult patients already exist in Latin America.'
sentences:
- '[YEAR_RANGE] 2021-2025 [TEXT] AbstractIntroduction. Diabetes mellitus is one
of the most common diseases worldwide, with a high morbidity and mortality rate.
Its prevalence has been increasing, as well as its acute complications, such as
hyperglycemic crises. Hyperglycemic crises can present with combined features
of diabetic ketoacidosis and hyperosmolar state. However, their implications are
not fully understood.Objective. To describe the characteristics, outcomes, and
complications of the diabetic population with hyperglycemic crises and to value
the combined state in the Latin American population.Materials and methods. Retrospective
observational study of all hyperglycemic crises treated in the intensive care
unit of the Fundación Valle del Lili between January 1, 2015, and December 31,
2020. Descriptive analysis and prevalence ratio estimation for deaths were performed
using the robust Poisson regression method.Results. There were 317 patients with
confirmed hyperglycemic crises, 43 (13.56%) with diabetic ketoacidosis, 9 (2.83%)
in hyperosmolar state, and 265 (83.59%) with combined diabetic ketoacidosis and
hyperosmolar state. Infection was the most frequent triggering cause (52.52%).
Fatalities due to ketoacidosis occurred in four patients (9.30%) and combined
diabetic ketoacidosis/hyperosmolar state in 22 patients (8.30%); no patient had
a hyperosmolar state. Mechanical ventilation was associated with death occurrence
(adjusted PR = 1.15; 95 % CI 95 = 1.06 - 1.24).Conclusions. The combined state
was the most prevalent presentation of the hyperglycemic crisis, with a mortality
rate similar to diabetic ketoacidosis. Invasive mechanical ventilation was associated
with a higher occurrence of death.'
- '[YEAR_RANGE] 2021-2025 [TEXT] Impactful research on refugee mental health is
urgently needed. To mitigate the growing refugee crisis, researchers and clinicians
seek to better understand the relationship between trauma, grief and post-migration
factors with the aim of bringing better awareness, more resources and improved
support for these communities and individuals living in host countries. As much
as this is our intention, the prevailing research methods, that is, online anonymous
questionnaires, used to engage refugees in mental health research are increasingly
outdated and lack inclusivity and representation. With this perspective piece,
we would like to highlight a growing crisis in global mental health research;
the predominance of a Global North-centric approach and methodology. We use our
recent research challenges and breakdowns as a learning example and possible opportunity
to rebuild our research practice in a more ethical and equitable way.'
- '[YEAR_RANGE] 2021-2025 [TEXT] Carbon capture and utilization (CCU) covers an
array of technologies for valorizing carbon dioxide (CO2). To date, most mature
CCU technology conducted with capture agents operates against the CO2 gradient
to desorb CO2 from capture agents, exhibiting high energy penalties and thermal
degradation due to the requirement for thermal swings. This Perspective presents
a concept of Bio-Integrated Carbon Capture and Utilization (BICCU), which utilizes
methanogens for integrated release and conversion of CO2 captured with capture
agents. BICCU hereby substitutes the energy-intensive desorption with microbial
conversion of captured CO2 by the methanogenic CO2-reduction pathway, utilizing
green hydrogen to generate non-fossil methane. Existing carbon capture and utilization
technologies are hindered by significant energy penalties. Here, the authors discuss
the Bio-Integrated Carbon Capture and Utilization (BICCU) technology, which mitigates
the energy penalties while generating valuable C1 and C2 products.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on the parquet dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- parquet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pankajrajdeo/Bioformer-8L-UMLS-Pubmed_PMC-ST-TCE-Epoch-1")
# Run inference
sentences = [
'[YEAR_RANGE] 2021-2025 [TEXT] Combined hyperglycemic crises in adult patients already exist in Latin America.',
'[YEAR_RANGE] 2021-2025 [TEXT] AbstractIntroduction. Diabetes mellitus is one of the most common diseases worldwide, with a high morbidity and mortality rate. Its prevalence has been increasing, as well as its acute complications, such as hyperglycemic crises. Hyperglycemic crises can present with combined features of diabetic ketoacidosis and hyperosmolar state. However, their implications are not fully understood.Objective. To describe the characteristics, outcomes, and complications of the diabetic population with hyperglycemic crises and to value the combined state in the Latin American population.Materials and methods. Retrospective observational study of all hyperglycemic crises treated in the intensive care unit of the Fundación Valle del Lili between January 1, 2015, and December 31, 2020. Descriptive analysis and prevalence ratio estimation for deaths were performed using the robust Poisson regression method.Results. There were 317 patients with confirmed hyperglycemic crises, 43 (13.56%) with diabetic ketoacidosis, 9 (2.83%) in hyperosmolar state, and 265 (83.59%) with combined diabetic ketoacidosis and hyperosmolar state. Infection was the most frequent triggering cause (52.52%). Fatalities due to ketoacidosis occurred in four patients (9.30%) and combined diabetic ketoacidosis/hyperosmolar state in 22 patients (8.30%); no patient had a hyperosmolar state. Mechanical ventilation was associated with death occurrence (adjusted PR = 1.15; 95 % CI 95 = 1.06 - 1.24).Conclusions. The combined state was the most prevalent presentation of the hyperglycemic crisis, with a mortality rate similar to diabetic ketoacidosis. Invasive mechanical ventilation was associated with a higher occurrence of death.',
'[YEAR_RANGE] 2021-2025 [TEXT] Carbon capture and utilization (CCU) covers an array of technologies for valorizing carbon dioxide (CO2). To date, most mature CCU technology conducted with capture agents operates against the CO2 gradient to desorb CO2 from capture agents, exhibiting high energy penalties and thermal degradation due to the requirement for thermal swings. This Perspective presents a concept of Bio-Integrated Carbon Capture and Utilization (BICCU), which utilizes methanogens for integrated release and conversion of CO2 captured with capture agents. BICCU hereby substitutes the energy-intensive desorption with microbial conversion of captured CO2 by the methanogenic CO2-reduction pathway, utilizing green hydrogen to generate non-fossil methane. Existing carbon capture and utilization technologies are hindered by significant energy penalties. Here, the authors discuss the Bio-Integrated Carbon Capture and Utilization (BICCU) technology, which mitigates the energy penalties while generating valuable C1 and C2 products.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 512]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### parquet
* Dataset: parquet
* Size: 6,150,902 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 39.88 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 277.54 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>[YEAR_RANGE] 1896-1900 [TEXT] ON THE PIGMENT OF THE NEGRO'S SKIN AND HAIR</code> | <code>[YEAR_RANGE] 1896-1900 [TEXT] The pigmentary granules of the negro's skin and hair can be freed in several ways from the cells in which they are lodged and collected in any desired amount. As thus obtained, these granules are found to be insoluble in dilute alkalies, dilute hydrochloric acid (hot or cold), alcohol, or other organic solvents when applied in the order named. If, after they have been subjected to the action of dilute hydrochloric acid, they are again treated with dilute alkalies, they are found to give up their pigment, and, on the continued application of heat, the granules dissolve entirely in the alkaline solution, leaving only an insignificant residue. The pigmentary granules are composed of a colourless ground substance or substratum, a pigment, and much inorganic matter. Their inorganic constituents, as thus far determined, are calcium, magnesium, iron, and silicic, phosphoric, and sulphuric acids; and these constituents possibly play an important part in the deposi...</code> |
| <code>[YEAR_RANGE] 1896-1900 [TEXT] THE HISTOLOGIGAL LESIONS OF ACUTE GLANDERS IN MAN AND OF EXPERIMENTAL GLANDERS IN THE GUINEA-PIG</code> | <code>[YEAR_RANGE] 1896-1900 [TEXT] The glanders nodule in the class of cases studied by us is in no sense analogous to the miliary tubercle in its histogenesis, and our studies afford no support to Baumgarten's views. The primary effect of the bacillus of glanders on a tissue we found to be not a production of epithelioid cells, which undergo necrosis and invasion by leucocytes, as happens in the cases in which the bacillus of tuberculosis is concerned, but to be the production of primary necrosis of the tissue, followed by inflammatory exudation, often of a suppurative character. Degenerative changes rapidly ensue in the inflammatory products. These conclusions are in harmony with the observations of Tedeschi, above referred to.</code> |
| <code>[YEAR_RANGE] 1896-1900 [TEXT] THE EFFECT OF ODOURS, IRRITANT VAPOURS, AND MENTAL WORK UPON THE BLOOD FLOW</code> | <code>[YEAR_RANGE] 1896-1900 [TEXT] The most important of this investigation has been the completion of various improvements in the construction and use of the plethysmograph, by means of which numerous errors attending the use of the instrument have been eliminated. The results of the work show that all olfactory sensations, so far as they produce any effect through the vasomotor system, tend to diminish the volume of the arm, and therefore presumably cause a congestion of the brain. Whenever the stimulation occassions an increase in the volume of the arm, as sometimes happens, it seems to be due to acceleration of the heart rate, which, of course, tends also to increase the supply of blood to the brain. The of odours varies in extent with different individuals, and with the same individual at different times. It was most marked in subjects sensitive to odours. Irritant vapours, such as formic acid, have a marked effect in the same direction—that is, they cause a strong diminution in the vo...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
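For orientation, a minimal sketch of training with this loss in `sentence-transformers`; the base checkpoint and the in-memory examples below are assumptions (the card does not name its base model), only the loss and its parameters come from this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Assumed base checkpoint; the real training data is the parquet anchor/positive pairs.
model = SentenceTransformer("bert-base-uncased")
train_examples = [
    InputExample(texts=[
        "[YEAR_RANGE] 2021-2025 [TEXT] some article title",
        "[YEAR_RANGE] 2021-2025 [TEXT] the matching abstract text",
    ]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=128)
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default similarity
model.fit(train_objectives=[(loader, loss)], epochs=1)
```
In-batch negatives make this loss effective at large batch sizes, which matches the card's `per_device_train_batch_size` of 128.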
### Evaluation Dataset
#### parquet
* Dataset: parquet
* Size: 6,150,902 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 28.46 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 303.55 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>[YEAR_RANGE] 2021-2025 [TEXT] Construction of Metal/Zeolite Hybrid Nanoframe Reactors via</code> | <code>[YEAR_RANGE] 2021-2025 [TEXT] Metal/zeolite hybrid nanoframes featuring highly accessible compartmental environments, abundant heterogeneous interfaces, and diverse chemical compositions are expected to possess significant potential for heterogeneous catalysis, yet their general synthetic methodology has not yet been established. In this study, we developed a two-step in-situ-kinetics transformation approach to prepare metal/ZSM-5 hybrid nanoframes with exceptionally open nanostructures, tunable metal compositions, and abundant accessible active sites. Initially, the process involved the formation of single-crystalline ZSM-5 nanoframes through an anisotropic etching and recrystallization kinetic transformation process. Subsequently, through an in situ reaction of the Ni2+ ions and the silica species etched from ZSM-5 nanoframes, layered nickel silicate emerged on both the inner and outer surfaces of the zeolite nanoframes. Upon reduction under a hydrogen atmosphere, well-dispersed Ni n...</code> |
| <code>[YEAR_RANGE] 2021-2025 [TEXT] Genome-wide sRNA and mRNA transcriptomic profiling insights into carbapenem-resistant</code> | <code>[YEAR_RANGE] 2021-2025 [TEXT] Introduction Acinetobacter baumannii (AB) is rising as a human pathogen of critical priority worldwide as it is the leading cause of opportunistic infections in healthcare settings and carbapenem-resistant AB is listed as a “super bacterium” or “priority pathogen for drug resistance” by the World Health Organization.MethodsClinical isolates of A. baumannii were collected and tested for antimicrobial susceptibility. Among them, carbapenem-resistant and carbapenem-sensitive A. baumannii were subjected to prokaryotic transcriptome sequencing. The change of sRNA and mRNA expression was analyzed by bioinformatics and validated by quantitative reverse transcription-PCR.ResultsA total of 687 clinical isolates were collected, of which 336 strains of A. baumannii were resistant to carbapenem. Five hundred and six differentially expressed genes and nineteen differentially expressed sRNA candidates were discovered through transcriptomic profile analysis between carba...</code> |
| <code>[YEAR_RANGE] 2021-2025 [TEXT] Evaluation and modeling of diaphragm displacement using ultrasound imaging for wearable respiratory assistive robot</code> | <code>[YEAR_RANGE] 2021-2025 [TEXT] IntroductionAssessing the influence of respiratory assistive devices on the diaphragm mobility is essential for advancing patient care and improving treatment outcomes. Existing respiratory assistive robots have not yet effectively assessed their impact on diaphragm mobility. In this study, we introduce for the first time a non-invasive, real-time clinically feasible ultrasound method to evaluate the impact of soft wearable robots on diaphragm displacement.MethodsWe measured and compared diaphragm displacement and lung volume in eight participants during both spontaneous and robotic-assisted respiration. Building on these measurements, we proposed a human-robot coupled two-compartment respiratory mechanics model that elucidates the underlying mechanism by which our extracorporeal wearable robots augments respiration. Specifically, the soft robot applies external compression to the abdominal wall muscles, inducing their inward movement, which consequently p...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `max_steps`: 91302
- `log_level`: info
- `fp16`: True
- `dataloader_num_workers`: 16
- `load_best_model_at_end`: True
- `resume_from_checkpoint`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: 91302
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: info
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 16
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: True
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0000 | 1 | 2.7287 | - |
| 0.0219 | 1000 | 0.3483 | - |
| 0.0438 | 2000 | 0.1075 | - |
| 0.0657 | 3000 | 0.085 | - |
| 0.0876 | 4000 | 0.0808 | - |
| 0.1095 | 5000 | 0.0707 | - |
| 0.1314 | 6000 | 0.0702 | - |
| 0.1533 | 7000 | 0.0675 | - |
| 0.1752 | 8000 | 0.0549 | - |
| 0.1971 | 9000 | 0.0616 | - |
| 0.2190 | 10000 | 0.0616 | - |
| 0.2410 | 11000 | 0.0548 | - |
| 0.2629 | 12000 | 0.0584 | - |
| 0.2848 | 13000 | 0.0554 | - |
| 0.3067 | 14000 | 0.0533 | - |
| 0.3286 | 15000 | 0.0485 | - |
| 0.3505 | 16000 | 0.0545 | - |
| 0.3724 | 17000 | 0.0579 | - |
| 0.3943 | 18000 | 0.0645 | - |
| 0.4162 | 19000 | 0.0461 | - |
| 0.4381 | 20000 | 0.0604 | - |
| 0.4600 | 21000 | 0.054 | - |
| 0.4819 | 22000 | 0.0481 | - |
| 0.5038 | 23000 | 0.0525 | - |
| 0.5257 | 24000 | 0.0497 | - |
| 0.5476 | 25000 | 0.0492 | - |
| 0.5695 | 26000 | 0.0428 | - |
| 0.5914 | 27000 | 0.0411 | - |
| 0.6133 | 28000 | 0.0356 | - |
| 0.6352 | 29000 | 0.0421 | - |
| 0.6571 | 30000 | 0.0369 | - |
| 0.6791 | 31000 | 0.0384 | - |
| 0.7010 | 32000 | 0.0395 | - |
| 0.7229 | 33000 | 0.0413 | - |
| 0.7448 | 34000 | 0.0375 | - |
| 0.7667 | 35000 | 0.0373 | - |
| 0.7886 | 36000 | 0.0347 | - |
| 0.8105 | 37000 | 0.039 | - |
| 0.8324 | 38000 | 0.0368 | - |
| 0.8543 | 39000 | 0.0365 | - |
| 0.8762 | 40000 | 0.0333 | - |
| 0.8981 | 41000 | 0.036 | - |
| 0.9200 | 42000 | 0.0384 | - |
| 0.9419 | 43000 | 0.0347 | - |
| 0.9638 | 44000 | 0.0358 | - |
| 0.9857 | 45000 | 0.0355 | - |
| 1.0000 | 45651 | - | 0.0044 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
lesso10/02a78887-e22e-4b53-bdc9-8d40bf154992 | lesso10 | "2025-01-25T12:54:02Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-25T12:41:03Z" | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 02a78887-e22e-4b53-bdc9-8d40bf154992
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: true
chat_template: llama3
datasets:
- data_files:
- a8897e19ee045d4f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a8897e19ee045d4f_train_data.json
type:
field_instruction: INSTRUCTION
field_output: RESPONSE
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso10/02a78887-e22e-4b53-bdc9-8d40bf154992
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/a8897e19ee045d4f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ef70b395-1c8f-419f-af46-58a046d20b33
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ef70b395-1c8f-419f-af46-58a046d20b33
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 02a78887-e22e-4b53-bdc9-8d40bf154992
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
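The card ships no usage snippet; as an unofficial sketch, the LoRA adapter could be attached to its base model with PEFT as follows (8-bit loading mirrors the `load_in_8bit: true` setting in the config above):
```python
# Unverified sketch: attach this LoRA adapter to its base model with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Intel/neural-chat-7b-v3-3", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "lesso10/02a78887-e22e-4b53-bdc9-8d40bf154992")
tokenizer = AutoTokenizer.from_pretrained("Intel/neural-chat-7b-v3-3")
```
Given the NaN training and validation losses reported below, outputs from this adapter should be treated with caution.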
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0010 | 1 | nan |
| 0.0 | 0.0049 | 5 | nan |
| 0.0 | 0.0098 | 10 | nan |
| 0.0 | 0.0147 | 15 | nan |
| 0.0 | 0.0196 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
furrutiav/modernbert_mixtral_nllfg_vanilla_qnli_none_naive | furrutiav | "2025-03-23T02:48:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-23T02:47:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
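No official snippet is provided. As a generic, unverified starting point, the `feature-extraction` tag suggests the checkpoint loads with the standard auto classes (ModernBERT requires a recent transformers release):
```python
# Generic sketch for extracting embeddings; not verified against this
# specific checkpoint.
import torch
from transformers import AutoTokenizer, AutoModel

repo = "furrutiav/modernbert_mixtral_nllfg_vanilla_qnli_none_naive"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("An example sentence.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
```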
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DBangshu/V4_Base_GPT2_e5_0_7 | DBangshu | "2024-11-29T15:36:27Z" | 132 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-29T15:36:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
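No official snippet is provided; a generic text-generation sketch, assuming standard GPT-2 behavior (unverified for this checkpoint):
```python
# Generic sketch: run the checkpoint with the text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="DBangshu/V4_Base_GPT2_e5_0_7")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```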
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
spsither/wav2vec2_run9.40 | spsither | "2024-02-11T12:26:46Z" | 63 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-02-11T12:26:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
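No official snippet is provided; a generic sketch for CTC-style transcription, assuming the repository includes a processor and expects 16 kHz mono audio (neither is confirmed by this card):
```python
# Generic sketch: transcribe an audio file with the ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="spsither/wav2vec2_run9.40")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```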
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
baby-dev/5dea7951-b485-4738-94db-af1aa7b264cf | baby-dev | "2025-03-15T01:34:47Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:NousResearch/Yarn-Llama-2-13b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-64k",
"region:us"
] | null | "2025-03-15T01:34:19Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Yarn-Llama-2-13b-64k
model-index:
- name: baby-dev/5dea7951-b485-4738-94db-af1aa7b264cf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/5dea7951-b485-4738-94db-af1aa7b264cf
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tanganke/clip-vit-base-patch16_oxford_flowers102 | tanganke | "2024-12-13T02:42:07Z" | 103 | 0 | null | [
"tensorboard",
"safetensors",
"clip_vision_model",
"dataset:dpdl-benchmark/oxford_flowers102",
"base_model:openai/clip-vit-base-patch16",
"base_model:finetune:openai/clip-vit-base-patch16",
"region:us"
] | null | "2024-12-13T02:41:45Z" | ---
base_model:
- openai/clip-vit-base-patch16
datasets:
- dpdl-benchmark/oxford_flowers102
metrics:
- accuracy
---
# Model Card
## Training Details
Adam optimizer with a constant learning rate of 1e-5 for 4,000 training steps (batch_size=128).
Only the vision encoder is fine-tuned.
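No loading snippet is provided. Assuming the repository stores the fine-tuned vision tower as a `CLIPVisionModel` (as the `clip_vision_model` tag suggests), one plausible sketch pairs it with the original CLIP text tower for zero-shot evaluation:
```python
# Sketch under stated assumptions: swap the fine-tuned vision tower into a
# stock CLIP model, keeping the original text tower.
from transformers import CLIPModel, CLIPProcessor, CLIPVisionModel

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
vision = CLIPVisionModel.from_pretrained(
    "tanganke/clip-vit-base-patch16_oxford_flowers102"
)
clip.vision_model = vision.vision_model  # only the vision encoder was tuned
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")
```
The accuracies below could then be reproduced by running zero-shot classification over the Oxford Flowers 102 test split.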
## Evaluation Results
Test set accuracy:
- pre-trained: 0.7131240963935852
- fine-tuned: 0.948772132396698 |
levi-chai-shop/eren-yeager | levi-chai-shop | "2023-10-01T20:34:42Z" | 9 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-01T20:28:10Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### eren_yeager Dreambooth model trained by levi-chai-shop following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: IIITS-4
Sample pictures of this concept:
|
Triangle104/FuseO1-DeepSeekR1-QwQ-32B-Preview-Q3_K_L-GGUF | Triangle104 | "2025-02-01T08:57:26Z" | 27 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview",
"base_model:quantized:FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-01T08:54:32Z" | ---
license: apache-2.0
base_model: FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/FuseO1-DeepSeekR1-QwQ-32B-Preview-Q3_K_L-GGUF
This model was converted to GGUF format from [`FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview`](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) for more details on the model.
---
FuseO1-Preview is our initial endeavor to enhance the System-II reasoning capabilities of large language models (LLMs) through innovative model fusion techniques. By employing our advanced SCE merging methodologies, we integrate multiple open-source o1-like LLMs into a unified model. Our goal is to incorporate the distinct knowledge and strengths from different reasoning LLMs into a single, unified model with strong System-II reasoning abilities, particularly in mathematics, coding, and science domains.
To achieve this, we conduct two types of model merging:
Long-Long Reasoning Merging: This approach involves model fusion across LLMs that utilize long-CoT reasoning, with the goal of enhancing long-CoT reasoning capabilities. The resulting FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview achieves a Pass@1 accuracy of 74.0 on AIME24, demonstrating significant performance improvements compared to OpenAI o1-preview (44.6) and OpenAI o1-mini (63.4), even approaching OpenAI o1 (79.2).
Long-Short Reasoning Merging: This approach involves model fusion between long-CoT and short-CoT LLMs, aiming to improve reasoning capabilities in both long and short reasoning processes. The resulting FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview and FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview are capable of utilizing both long and short reasoning processes and demonstrate relatively strong performance in long reasoning tasks.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/FuseO1-DeepSeekR1-QwQ-32B-Preview-Q3_K_L-GGUF --hf-file fuseo1-deepseekr1-qwq-32b-preview-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/FuseO1-DeepSeekR1-QwQ-32B-Preview-Q3_K_L-GGUF --hf-file fuseo1-deepseekr1-qwq-32b-preview-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/FuseO1-DeepSeekR1-QwQ-32B-Preview-Q3_K_L-GGUF --hf-file fuseo1-deepseekr1-qwq-32b-preview-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/FuseO1-DeepSeekR1-QwQ-32B-Preview-Q3_K_L-GGUF --hf-file fuseo1-deepseekr1-qwq-32b-preview-q3_k_l.gguf -c 2048
```
|
aXhyra/demo_irony_31415 | aXhyra | "2021-12-13T17:54:43Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_irony_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.685764300192161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_irony_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2905
- F1: 0.6858
## Model description
More information needed
## Intended uses & limitations
More information needed
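For illustration only, the checkpoint can presumably be run as a binary irony classifier through the standard pipeline (label names depend on the model config):
```python
# Illustrative sketch: classify a tweet-like input for irony.
from transformers import pipeline

clf = pipeline("text-classification", model="aXhyra/demo_irony_31415")
print(clf("Great, another Monday morning meeting."))
```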
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.7735294032820418e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 358 | 0.5872 | 0.6786 |
| 0.5869 | 2.0 | 716 | 0.6884 | 0.6952 |
| 0.3417 | 3.0 | 1074 | 0.9824 | 0.6995 |
| 0.3417 | 4.0 | 1432 | 1.2905 | 0.6858 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
YoelCanaza/distilroberta-base-mrpc-glue-yoel-c | YoelCanaza | "2024-01-30T08:33:11Z" | 94 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-30T08:27:54Z" | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mrpc-glue-yoel-c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-yoel-c
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6408
- Accuracy: 0.8358
- F1: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5147 | 1.09 | 500 | 0.7097 | 0.8211 | 0.8765 |
| 0.3542 | 2.18 | 1000 | 0.6408 | 0.8358 | 0.8780 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
|
mradermacher/gpt-2-health-faq-i1-GGUF | mradermacher | "2025-03-01T01:00:07Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:20MIA1140/gpt-2-health-faq",
"base_model:quantized:20MIA1140/gpt-2-health-faq",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-03-01T00:53:47Z" | ---
base_model: 20MIA1140/gpt-2-health-faq
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/20MIA1140/gpt-2-health-faq
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gpt-2-health-faq-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-2-health-faq-i1-GGUF/resolve/main/gpt-2-health-faq.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/lidiya_-_bart-large-xsum-samsum-4bits | RichardErkhov | "2024-05-09T19:22:11Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-09T19:21:25Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bart-large-xsum-samsum - bnb 4bits
- Model creator: https://huggingface.co/lidiya/
- Original model: https://huggingface.co/lidiya/bart-large-xsum-samsum/
Original model description:
---
language: en
tags:
- bart
- seq2seq
- summarization
license: apache-2.0
datasets:
- samsum
widget:
- text: |
Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him 🙂
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
model-index:
- name: bart-large-xsum-samsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization"
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 54.3921
- name: Validation ROUGE-2
type: rouge-2
value: 29.8078
- name: Validation ROUGE-L
type: rouge-l
value: 45.1543
- name: Test ROUGE-1
type: rouge-1
value: 53.3059
- name: Test ROUGE-2
type: rouge-2
value: 28.355
- name: Test ROUGE-L
type: rouge-l
value: 44.0953
---
## `bart-large-xsum-samsum`
This model was obtained by fine-tuning `facebook/bart-large-xsum` on [Samsum](https://huggingface.co/datasets/samsum) dataset.
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="lidiya/bart-large-xsum-samsum")
conversation = '''Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him 🙂
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
'''
summarizer(conversation)
```
## Training procedure
- Colab notebook: https://colab.research.google.com/drive/1dul0Sg-TTMy9xZCJzmDRajXbyzDwtYx6?usp=sharing
## Results
| key | value |
| --- | ----- |
| eval_rouge1 | 54.3921 |
| eval_rouge2 | 29.8078 |
| eval_rougeL | 45.1543 |
| eval_rougeLsum | 49.942 |
| test_rouge1 | 53.3059 |
| test_rouge2 | 28.355 |
| test_rougeL | 44.0953 |
| test_rougeLsum | 48.9246 |
|
keylazy/Llama-2-7b-chat-hf-ark-ft | keylazy | "2023-11-10T23:35:44Z" | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:keylazy/Llama-2-7b-chat-hf-ark",
"base_model:finetune:keylazy/Llama-2-7b-chat-hf-ark",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-09T04:44:18Z" | ---
base_model: keylazy/Llama-2-7b-chat-hf-ark
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: Llama-2-7b-chat-hf-ark-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-chat-hf-ark-ft
This model is a fine-tuned version of [keylazy/Llama-2-7b-chat-hf-ark](https://huggingface.co/keylazy/Llama-2-7b-chat-hf-ark) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1845
- Accuracy: 0.9435
- Precision: 0.9435
- Recall: 0.9435
- F1: 0.9434
## Model description
More information needed
## Intended uses & limitations
More information needed
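The card reports classification metrics, so the checkpoint presumably carries a sequence-classification head; an unverified sketch follows (the label set is not documented):
```python
# Unverified sketch: run the fine-tuned Llama classifier on one input.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "keylazy/Llama-2-7b-chat-hf-ark-ft"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input text.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
```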
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1635 | 0.5 | 3828 | 0.1612 | 0.9267 | 0.9270 | 0.9267 | 0.9267 |
| 0.1302 | 1.0 | 7656 | 0.1330 | 0.9424 | 0.9429 | 0.9424 | 0.9423 |
| 0.0352 | 1.5 | 11484 | 0.1845 | 0.9435 | 0.9435 | 0.9435 | 0.9434 |
| 0.0316 | 2.0 | 15312 | 0.1851 | 0.9428 | 0.9429 | 0.9428 | 0.9428 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
jvelja/ppo-distilbert-base-uncased-epoch-30 | jvelja | "2024-07-26T13:28:28Z" | 45 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | "2024-07-26T13:28:24Z" | ---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja/ppo-distilbert-base-uncased-epoch-30")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja/ppo-distilbert-base-uncased-epoch-30")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/ppo-distilbert-base-uncased-epoch-30")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Sicarius-Prototyping/L3.3_RP_Experiment | Sicarius-Prototyping | "2024-12-19T04:07:51Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-18T19:56:02Z" | ---
license: llama3.3
base_model:
- meta-llama/Llama-3.3-70B-Instruct
library_name: transformers
--- |
parabolicx/peapods | parabolicx | "2024-09-30T16:56:45Z" | 66 | 2 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"flux",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-09-30T16:12:23Z" | ---
tags:
- text-to-image
- lora
- diffusers
- flux
widget:
- text: >-
a 3D render of a green samurai, wearing samurai gear with long samurai ponytail, holding a sword, no ears. Cherry blossoms in the background with japanese style homes, in the style of $PEAS
output:
url: samurai.jpg
- text: a 3D render of a mathemetician peabro, standing in front of a chalkboard, holding a triangle,. Wearing glasses. Slicked back dark green hair. wearing light grey robes. The chalkboard says 'a2 + b2 = c2'
output:
url: peathagarus.jpg
- text: a 3D render of a green peabro boxer, wearing a red and gold championship belt, with red gloves, wearing a boxing robe, standing in a boxing ring, large crowd in the background, in the style of $PEAS
output:
url: champean.jpg
- text: a 3D render of a green pirate, wearing a pirate outfit with eyepatch and pirate hat, holding sword, with a red parrot on his shoulder. Has a peg leg. standing on a ship with the ocean in the background, in the style of $PEAS.
output:
url: pearate.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: in the style of $PEAS
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Peapods / Peabro Model
Flux LoRA for testing purposes, trained on Peabro
## Trigger words
You should use `in the style of $PEAS` and `peabro` to trigger the image generation
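A minimal diffusers sketch for applying the LoRA is shown below, assuming the standard FLUX.1-dev + LoRA workflow (the adapter file name inside the repo is not specified on this card):
```python
# Sketch, assuming the standard diffusers Flux + LoRA workflow.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("parabolicx/peapods")
image = pipe(
    "a 3D render of peabro as an astronaut, in the style of $PEAS"
).images[0]
image.save("peabro.png")
```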
## Example prompts
a 3D render of a green samurai, wearing samurai gear with long samurai ponytail, holding a sword, no ears. Cherry blossoms in the background with japanese style homes, in the style of $PEAS
a 3D render of a green pirate, wearing a pirate outfit with eyepatch and pirate hat, holding sword, with a red parrot on his shoulder. Has a peg leg. standing on a ship with the ocean in the background, in the style of $PEAS.
a 3D render of a green peabro boxer, wearing a red and gold championship belt, with red gloves, wearing a boxing robe, standing in a boxing ring, large crowd in the background, in the style of $PEAS
a 3D render of a green peabro magician, wearing a black suit and black cape, holding a magician's wand and holding a top-hat with a fluffy blue rabbit inside of it, standing on a stage with stage lighting, in the style of $PEAS
a 3D render of peabro wearing a vampire costume, with vampire teeth, holding a jack-o-lantern full of peas. The background is a spooky neighborhood with fog and depth of field. Night time, in the style of $PEAS
a 3D render of green peabro king with white gold, jeweled crown. He is wearing luxurious white cloth robes and holds a white gold ornate staff. At the top of his staff is a green glowing orb. He looks confident and dignified, in the style of $PEAS
<Gallery /> |
Bhardawaj/slc-opt-125-gptq | Bhardawaj | "2024-05-28T05:42:13Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-05-28T05:42:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
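No official snippet is provided; a generic sketch for a GPTQ-quantized OPT checkpoint, assuming the GPTQ runtime stack (e.g., optimum/auto-gptq) is installed and a GPU is available:
```python
# Generic sketch: load and sample from the 4-bit GPTQ checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Bhardawaj/slc-opt-125-gptq"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```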
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
texanrangee/662f069c-bdee-4f6e-9ede-4bd96bf18fa1 | texanrangee | "2025-03-15T16:54:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-15T12:40:43Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Decoworship/lora_model_llama-3_beegol | Decoworship | "2024-05-10T15:27:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-10T15:26:30Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Decoworship
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
waldie/Qwentile2.5-32B-Instruct-4bpw-h6-exl2 | waldie | "2025-01-04T21:46:07Z" | 20 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"base_model:maldv/Qwentile2.5-32B-Instruct",
"base_model:quantized:maldv/Qwentile2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | "2025-01-04T21:13:51Z" | ---
license: apache-2.0
library_name: transformers
language:
- en
tags:
- chat
- conversational
base_model: maldv/Qwentile2.5-32B-Instruct
quantized_by: waldie
---

[imat quants](https://huggingface.co/mradermacher/Qwentile2.5-32B-Instruct-i1-GGUF)
# Qwentile 2.5 32B Instruct
Qwentile 2.5 32B Instruct is a *normalized denoised fourier interpolation* of the following models:
```yaml
output_base_model: "Qwen/Qwen2.5-32B"
finetune_merge:
- { "model": "AiCloser/Qwen2.5-32B-AGI", "base": "Qwen/Qwen2.5-32B", "alpha": 0.3 }
- { "model": "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2", "base": "Qwen/Qwen2.5-32B", "alpha": 0.7 }
- { "model": "fblgit/TheBeagle-v2beta-32B-MGS", "base": "Qwen/Qwen2.5-32B", "alpha": 0.6 }
- { "model": "huihui-ai/Qwen2.5-32B-Instruct-abliterated", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 1.0 }
- { "model": "huihui-ai/QwQ-32B-Preview-abliterated", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0 }
- { "model": "Qwen/QwQ-32B-Preview", "base": "Qwen/Qwen2.5-32B", "alpha": 0.8, "is_input": true }
- { "model": "rombodawg/Rombos-LLM-V2.5-Qwen-32b", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0, "is_output": true }
- { "model": "nbeerbower/Qwen2.5-Gutenberg-Doppel-32B", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.4 }
```
In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model.
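The merge code itself isn't included in this card, so the following is only a rough sketch of the idea under my own assumptions — take each finetune's task vector (finetune minus base), move it into Fourier space, drop small coefficients as crude "denoising", blend by alpha, and normalize. The function name, thresholding rule, and normalization are illustrative, not maldv's actual implementation.

```python
import torch

def fourier_interpolate(base, finetunes, threshold=0.0):
    """Illustrative only: merge task vectors (finetune - base) in signal space."""
    accum = torch.zeros_like(base, dtype=torch.complex64)
    total_alpha = 0.0
    for weight, alpha in finetunes:                    # finetunes: [(tensor, alpha), ...]
        spec = torch.fft.fftn((weight - base).float()) # task vector -> frequency domain
        spec = spec * (spec.abs() > threshold)         # zero out tiny coefficients ("denoise")
        accum += alpha * spec
        total_alpha += alpha
    merged = torch.fft.ifftn(accum / max(total_alpha, 1e-8)).real  # back to weight space
    return base + merged.to(base.dtype)
```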
### What is this?
I started this experiment because QwQ is a really nifty model, but it was giving me problems with XML output - which is what I use for my thought tokens. So, I thought... let's just merge it in!
The first model worked pretty well, but I got the sense that the balances could be tweaked. Why not throw in some other models as well for fun and see if I can't run out of disk space in the process?
### Initial Results
It's a little crispier than Awqward, but it does generate stable output. Since it is based on the Qwen2.5 base model rather than the instruct model, it did not fail the math test; it scores alongside models twice its size.
## How to think
This model is very compliant with steering, and it has innate chain of thought, so producing nicely formatted chain-of-thought results is quite easy. Below is a very simple proof-of-concept example of how to achieve a thought turn.
#### System Prompt
```
You are my assistant. You should be helpful and use <thought></thought> xml.
All responses are in xml format.
```
#### One Shot
```
<thoughts>
<thought>Example thought</thought>
<idea>I have an idea</idea>
</thoughts>
```
#### Request
```
Think through how you render 2d polys at different scales.
```
#### Response
```
<response>
<thoughts>
<thought>When rendering 2D polygons at different scales, it's important to maintain their proportions and visual quality.</thought>
<idea>One approach could be using texture atlases for efficient scaling without losing detail.</idea>
<thought>Another idea is implementing level-of-detail (LOD) techniques to adjust polygon complexity based on scale.</thought>
<idea>We might also consider using shaders to handle dynamic scaling and apply effects smoothly.</idea>
</thoughts>
</response>
```
I did notice it sometimes drops trailing tags, so you should always validate (and if you are clever, repair) any structured responses.
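For instance, a tiny validate-and-repair pass (my own sketch, not something shipped with the model) can close whatever tags the model dropped before you accept a response:

```python
import re
import xml.etree.ElementTree as ET

def repair_xml(text):
    """Close any still-open tags, then check well-formedness (ignores self-closing tags)."""
    stack = []
    for slash, tag in re.findall(r"<(/?)(\w+)[^>]*>", text):
        if not slash:
            stack.append(tag)
        elif stack and stack[-1] == tag:
            stack.pop()
    repaired = text + "".join(f"</{t}>" for t in reversed(stack))
    try:
        ET.fromstring(repaired)
        return repaired
    except ET.ParseError:
        return None  # unrepairable; re-prompt instead
```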
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwentile2.5-32b-instruct,
title = {Qwentile 2.5 32B Instruct},
url = {https://huggingface.co/maldv/Qwentile2.5-32B-Instruct},
author = {Praxis Maldevide},
month = {December},
year = {2024}
}
``` |
lesso03/a70d0e3e-e006-4729-a8b6-8c2fcdc885a0 | lesso03 | "2025-02-09T00:30:34Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-14B",
"base_model:adapter:unsloth/Qwen2.5-14B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-08T15:39:55Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a70d0e3e-e006-4729-a8b6-8c2fcdc885a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# a70d0e3e-e006-4729-a8b6-8c2fcdc885a0
This model is a fine-tuned version of [unsloth/Qwen2.5-14B](https://huggingface.co/unsloth/Qwen2.5-14B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4256
## Model description
More information needed
## Intended uses & limitations
More information needed
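As a PEFT adapter over `unsloth/Qwen2.5-14B`, the checkpoint can presumably be loaded with the standard PEFT pattern — an untested sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-14B", device_map="auto")
model = PeftModel.from_pretrained(base, "lesso03/a70d0e3e-e006-4729-a8b6-8c2fcdc885a0")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-14B")
```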
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 407
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0025 | 1 | 1.7940 |
| 1.6343 | 0.1230 | 50 | 1.5928 |
| 1.622 | 0.2460 | 100 | 1.5606 |
| 1.6077 | 0.3690 | 150 | 1.5299 |
| 1.5909 | 0.4920 | 200 | 1.5081 |
| 1.5842 | 0.6150 | 250 | 1.4716 |
| 1.5487 | 0.7380 | 300 | 1.4472 |
| 1.5147 | 0.8610 | 350 | 1.4311 |
| 1.5145 | 0.9840 | 400 | 1.4256 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
baby-dev/ce016035-0a75-414d-a6c4-be311330c940 | baby-dev | "2025-02-07T05:20:01Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | "2025-02-07T01:23:41Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ce016035-0a75-414d-a6c4-be311330c940
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# ce016035-0a75-414d-a6c4-be311330c940
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf | RichardErkhov | "2024-05-21T13:38:46Z" | 157 | 0 | null | [
"gguf",
"arxiv:2405.01535",
"arxiv:2310.08491",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-21T10:45:07Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
prometheus-7b-v2.0 - GGUF
- Model creator: https://huggingface.co/prometheus-eval/
- Original model: https://huggingface.co/prometheus-eval/prometheus-7b-v2.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [prometheus-7b-v2.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q2_K.gguf) | Q2_K | 2.53GB |
| [prometheus-7b-v2.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [prometheus-7b-v2.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [prometheus-7b-v2.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [prometheus-7b-v2.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [prometheus-7b-v2.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q3_K.gguf) | Q3_K | 3.28GB |
| [prometheus-7b-v2.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [prometheus-7b-v2.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [prometheus-7b-v2.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [prometheus-7b-v2.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q4_0.gguf) | Q4_0 | 3.83GB |
| [prometheus-7b-v2.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [prometheus-7b-v2.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [prometheus-7b-v2.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q4_K.gguf) | Q4_K | 4.07GB |
| [prometheus-7b-v2.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [prometheus-7b-v2.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q4_1.gguf) | Q4_1 | 4.24GB |
| [prometheus-7b-v2.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q5_0.gguf) | Q5_0 | 4.65GB |
| [prometheus-7b-v2.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [prometheus-7b-v2.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q5_K.gguf) | Q5_K | 4.78GB |
| [prometheus-7b-v2.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [prometheus-7b-v2.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q5_1.gguf) | Q5_1 | 5.07GB |
| [prometheus-7b-v2.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q6_K.gguf) | Q6_K | 5.53GB |
| [prometheus-7b-v2.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf/blob/main/prometheus-7b-v2.0.Q8_0.gguf) | Q8_0 | 7.17GB |
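One typical way to run these quants (my example; the uploader doesn't prescribe a runtime) is to fetch a single file and load it with `llama-cpp-python`; Q4_K_M is a common quality/size middle ground:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

path = hf_hub_download(
    repo_id="RichardErkhov/prometheus-eval_-_prometheus-7b-v2.0-gguf",
    filename="prometheus-7b-v2.0.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
```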
Original model description:
---
tags:
- text2text-generation
datasets:
- prometheus-eval/Feedback-Collection
- prometheus-eval/Preference-Collection
license: apache-2.0
language:
- en
pipeline_tag: text2text-generation
library_name: transformers
metrics:
- pearsonr
- spearmanr
- kendall-tau
- accuracy
---
## Links for Reference
- **Homepage:** In Progress
- **Repository:** https://github.com/prometheus-eval/prometheus-eval
- **Paper:** https://arxiv.org/abs/2405.01535
- **Point of Contact:** [email protected]
# TL;DR
Prometheus 2 is an alternative to GPT-4 for fine-grained evaluation of an underlying LLM, and a reward model for Reinforcement Learning from Human Feedback (RLHF).

Prometheus 2 is a language model using [Mistral-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as a base model.
It is fine-tuned on 100K feedback within the [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) and 200K feedback within the [Preference Collection](https://huggingface.co/datasets/prometheus-eval/Preference-Collection).
It is also built by weight merging to support both absolute grading (direct assessment) and relative grading (pairwise ranking).
The surprising thing is that we find weight merging also improves performance on each format.
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Prometheus Checkpoints](https://huggingface.co/models?search=prometheus-eval/Prometheus)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2405.01535)
- [GitHub Repo](https://github.com/prometheus-eval/prometheus-eval)
Prometheus is trained with two different sizes (7B and 8x7B).
You could check the 8x7B sized LM on [this page](https://huggingface.co/prometheus-eval/prometheus-2-8x7b-v2.0).
Also, check out our dataset as well on [this page](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) and [this page](https://huggingface.co/datasets/prometheus-eval/Preference-Collection).
## Prompt Format
We have made wrapper functions and classes to conveniently use Prometheus 2 at [our github repository](https://github.com/prometheus-eval/prometheus-eval).
We highly recommend you use it!
However, if you just want to use the model for your use case, please refer to the prompt format below.
Note that absolute grading and relative grading require different prompt templates and system prompts.
### Absolute Grading (Direct Assessment)
Prometheus requires 4 components in the input: an instruction, a response to evaluate, a score rubric, and a reference answer. You can refer to the prompt format below.
You should fill in the instruction, response, reference answer, criteria description, and score descriptions for scores in the range of 1 to 5.
Fill in the components marked with \{text\}.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{orig_instruction}
###Response to evaluate:
{orig_response}
###Reference Answer (Score 5):
{orig_reference_answer}
###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}
###Feedback:
```
After this, you should apply the conversation template of Mistral (not applying it might lead to unexpected behaviors).
You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).
```
from fastchat.conversation import get_conv_template

conv = get_conv_template("mistral")  # Mistral conversation template from FastChat
conv.set_system_message("You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.")
conv.append_message(conv.roles[0], dialogs['instruction'])  # the filled-in prompt from above
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
x = tokenizer(prompt, truncation=False)  # the model's own tokenizer
```
As a result, feedback and a score decision will be generated, separated by the phrase ```[RESULT]```.
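A small helper for splitting that output (my own sketch; the `prometheus-eval` package linked above ships its own utilities) could be:

```python
import re

def parse_absolute(output):
    """Split the feedback text from the 1-5 score around the [RESULT] separator."""
    feedback, _, tail = output.partition("[RESULT]")
    match = re.search(r"[1-5]", tail)
    return feedback.strip(), int(match.group()) if match else None
```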
### Relative Grading (Pairwise Ranking)
Prometheus requires 4 components in the input: an instruction, 2 responses to evaluate, a score rubric, and a reference answer. You can refer to the prompt format below.
You should fill in the instruction, 2 responses, reference answer, and criteria description.
Fill in the components marked with \{text\}.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of two responses strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, choose a better response between Response A and Response B. You should refer to the score rubric.
3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (A or B)"
4. Please do not generate any other opening, closing, and explanations.
###Instruction:
{orig_instruction}
###Response A:
{orig_response_A}
###Response B:
{orig_response_B}
###Reference Answer:
{orig_reference_answer}
###Score Rubric:
{orig_criteria}
###Feedback:
```
After this, you should apply the conversation template of Mistral (not applying it might lead to unexpected behaviors).
You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).
```
from fastchat.conversation import get_conv_template

conv = get_conv_template("mistral")  # Mistral conversation template from FastChat
conv.set_system_message("You are a fair judge assistant assigned to deliver insightful feedback that compares individual performances, highlighting how each stands relative to others within the same cohort.")
conv.append_message(conv.roles[0], dialogs['instruction'])  # the filled-in prompt from above
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
x = tokenizer(prompt, truncation=False)  # the model's own tokenizer
```
As a result, feedback and a score decision will be generated, separated by the phrase ```[RESULT]```.
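The pairwise output can be split the same way, except the verdict is a letter (again just an illustrative sketch):

```python
def parse_relative(output):
    """Split the feedback text from the A/B verdict around the [RESULT] separator."""
    feedback, _, tail = output.partition("[RESULT]")
    verdict = next((c for c in tail if c in "AB"), None)
    return feedback.strip(), verdict
```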
## License
Feedback Collection, Preference Collection, and Prometheus 2 are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@misc{kim2023prometheus,
title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
year={2023},
eprint={2310.08491},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{kim2024prometheus,
title={Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models},
author={Seungone Kim and Juyoung Suk and Shayne Longpre and Bill Yuchen Lin and Jamin Shin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo},
year={2024},
eprint={2405.01535},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Polyrific/stella_1.5B_model | Polyrific | "2025-01-14T10:40:23Z" | 50 | 0 | null | [
"pytorch",
"safetensors",
"qwen2",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | "2025-01-14T10:32:49Z" | ---
license: apache-2.0
---
|
lesso04/6ed89a3e-2fe6-4035-b04d-95cbe7aadbd1 | lesso04 | "2025-03-16T11:09:41Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | null | "2025-03-11T17:12:52Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6ed89a3e-2fe6-4035-b04d-95cbe7aadbd1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 6ed89a3e-2fe6-4035-b04d-95cbe7aadbd1
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 7000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 4.5532 |
| 2.2095 | 0.2830 | 500 | 2.2110 |
| 2.0664 | 0.5659 | 1000 | 2.0544 |
| 1.9159 | 0.8489 | 1500 | 1.9542 |
| 1.8265 | 1.1319 | 2000 | 1.8648 |
| 1.7592 | 1.4148 | 2500 | 1.7852 |
| 1.6823 | 1.6978 | 3000 | 1.7296 |
| 1.7181 | 1.9808 | 3500 | 1.6886 |
| 1.5498 | 2.2637 | 4000 | 1.6714 |
| 1.5843 | 2.5467 | 4500 | 1.6264 |
| 1.4633 | 2.8297 | 5000 | 1.5999 |
| 1.3976 | 3.1126 | 5500 | 1.5913 |
| 1.364 | 3.3956 | 6000 | 1.5887 |
| 1.4394 | 3.6786 | 6500 | 1.5821 |
| 1.4108 | 3.9615 | 7000 | 1.5735 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xuykin/va-er | xuykin | "2024-01-25T19:13:16Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-01-25T19:10:01Z" | ---
license: creativeml-openrail-m
---
|
hkivancoral/hushem_40x_deit_base_adamax_00001_fold4 | hkivancoral | "2023-12-24T03:17:18Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-24T02:30:39Z" | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_adamax_00001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9523809523809523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_adamax_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2776
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2891 | 1.0 | 219 | 0.3655 | 0.9048 |
| 0.0271 | 2.0 | 438 | 0.1551 | 0.9762 |
| 0.0059 | 3.0 | 657 | 0.1424 | 0.9762 |
| 0.0011 | 4.0 | 876 | 0.1398 | 0.9762 |
| 0.0007 | 5.0 | 1095 | 0.1496 | 0.9762 |
| 0.0005 | 6.0 | 1314 | 0.1466 | 0.9762 |
| 0.0003 | 7.0 | 1533 | 0.1409 | 0.9762 |
| 0.0002 | 8.0 | 1752 | 0.1498 | 0.9762 |
| 0.0002 | 9.0 | 1971 | 0.1564 | 0.9762 |
| 0.0001 | 10.0 | 2190 | 0.1656 | 0.9524 |
| 0.0001 | 11.0 | 2409 | 0.1807 | 0.9524 |
| 0.0001 | 12.0 | 2628 | 0.1735 | 0.9762 |
| 0.0001 | 13.0 | 2847 | 0.1728 | 0.9762 |
| 0.0001 | 14.0 | 3066 | 0.1752 | 0.9762 |
| 0.0 | 15.0 | 3285 | 0.1830 | 0.9524 |
| 0.0 | 16.0 | 3504 | 0.1909 | 0.9762 |
| 0.0 | 17.0 | 3723 | 0.1856 | 0.9762 |
| 0.0 | 18.0 | 3942 | 0.1931 | 0.9762 |
| 0.0 | 19.0 | 4161 | 0.1937 | 0.9762 |
| 0.0 | 20.0 | 4380 | 0.2012 | 0.9762 |
| 0.0 | 21.0 | 4599 | 0.1972 | 0.9762 |
| 0.0 | 22.0 | 4818 | 0.2059 | 0.9762 |
| 0.0 | 23.0 | 5037 | 0.2072 | 0.9762 |
| 0.0 | 24.0 | 5256 | 0.2139 | 0.9762 |
| 0.0 | 25.0 | 5475 | 0.2220 | 0.9524 |
| 0.0 | 26.0 | 5694 | 0.2242 | 0.9762 |
| 0.0 | 27.0 | 5913 | 0.2291 | 0.9524 |
| 0.0 | 28.0 | 6132 | 0.2302 | 0.9524 |
| 0.0 | 29.0 | 6351 | 0.2283 | 0.9524 |
| 0.0 | 30.0 | 6570 | 0.2384 | 0.9524 |
| 0.0 | 31.0 | 6789 | 0.2437 | 0.9524 |
| 0.0 | 32.0 | 7008 | 0.2389 | 0.9762 |
| 0.0 | 33.0 | 7227 | 0.2474 | 0.9524 |
| 0.0 | 34.0 | 7446 | 0.2474 | 0.9524 |
| 0.0 | 35.0 | 7665 | 0.2453 | 0.9524 |
| 0.0 | 36.0 | 7884 | 0.2498 | 0.9524 |
| 0.0 | 37.0 | 8103 | 0.2535 | 0.9524 |
| 0.0 | 38.0 | 8322 | 0.2499 | 0.9762 |
| 0.0 | 39.0 | 8541 | 0.2607 | 0.9524 |
| 0.0 | 40.0 | 8760 | 0.2656 | 0.9524 |
| 0.0 | 41.0 | 8979 | 0.2652 | 0.9524 |
| 0.0 | 42.0 | 9198 | 0.2609 | 0.9524 |
| 0.0 | 43.0 | 9417 | 0.2697 | 0.9524 |
| 0.0 | 44.0 | 9636 | 0.2693 | 0.9524 |
| 0.0 | 45.0 | 9855 | 0.2763 | 0.9524 |
| 0.0 | 46.0 | 10074 | 0.2779 | 0.9524 |
| 0.0 | 47.0 | 10293 | 0.2750 | 0.9524 |
| 0.0 | 48.0 | 10512 | 0.2730 | 0.9524 |
| 0.0 | 49.0 | 10731 | 0.2766 | 0.9524 |
| 0.0 | 50.0 | 10950 | 0.2776 | 0.9524 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
WangResearchLab/llava-mlan-llama2-7b | WangResearchLab | "2024-11-19T02:14:25Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llava_llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-19T01:07:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ShenaoZ/0.001_ablation_4iters_bs256_decalpha_iter_4 | ShenaoZ | "2024-04-23T07:48:41Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_4iters_bs256_decalpha_iter_3",
"base_model:finetune:ShenaoZ/0.001_ablation_4iters_bs256_decalpha_iter_3",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-23T06:48:23Z" | ---
license: mit
base_model: ShenaoZ/0.001_ablation_4iters_bs256_decalpha_iter_3
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_ablation_4iters_bs256_decalpha_iter_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_4iters_bs256_decalpha_iter_4
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_4iters_bs256_decalpha_iter_3](https://huggingface.co/ShenaoZ/0.001_ablation_4iters_bs256_decalpha_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
SyedAunZaidi/wav2vec2-large-xls-r-300m-urdu-colab | SyedAunZaidi | "2023-07-22T23:10:22Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-07-20T19:40:49Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-urdu-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.8209424083769633
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-urdu-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.8209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.0012 | 3.09 | 400 | inf | 0.8209 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
jonatasgrosman/exp_w2v2t_ja_hubert_s334 | jonatasgrosman | "2022-07-08T16:31:52Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-08T16:31:29Z" | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_hubert_s334
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
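Transcription with HuggingSound then looks roughly like this (paths are placeholders; the audio must be 16 kHz):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ja_hubert_s334")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```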
|
clue/xlnet_chinese_large | clue | "2020-12-11T21:36:08Z" | 4 | 2 | transformers | [
"transformers",
"pytorch",
"xlnet",
"zh",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language: zh
---
## xlnet_chinese_large
### Overview
**Language model:** xlnet-large
**Model size:** 1.3G
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
```
import torch
from transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained("clue/xlnet_chinese_large")
xlnet = XLNetModel.from_pretrained("clue/xlnet_chinese_large")
with torch.no_grad():
    outputs = xlnet(**tokenizer("欢迎使用CLUE", return_tensors="pt"))
```
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
Enpas/small-trsc-3 | Enpas | "2024-06-04T19:37:34Z" | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-03T19:18:11Z" | ---
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
model-index:
- name: small-Cotrsc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-Cotrsc
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0487
- eval_wer: 39.6655
- eval_runtime: 516.4929
- eval_samples_per_second: 0.67
- eval_steps_per_second: 0.085
- epoch: 0.4231
- step: 1200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1200
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
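A quick way to try the checkpoint (my sketch; not from the card) is the ASR pipeline:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Enpas/small-trsc-3")
print(asr("sample.wav")["text"])
```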
|
kiwikiw/o1_13 | kiwikiw | "2025-02-27T06:01:48Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-27T06:01:47Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KasunAbeyweera/fine_tuned_llama3_sl_constitution | KasunAbeyweera | "2025-03-21T04:06:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-21T02:56:14Z" | ---
base_model: unsloth/llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KasunAbeyweera
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mhylle/gemma-reasoning-genius | mhylle | "2025-03-14T16:24:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-14T16:17:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MRNH/Feedformer-ett-hourly | MRNH | "2024-03-11T09:57:43Z" | 31 | 0 | transformers | [
"transformers",
"pytorch",
"dataset:yjseo/etth1_for_llm2",
"dataset:yjseo/etth1_for_llm",
"endpoints_compatible",
"region:us"
] | null | "2023-12-31T00:28:48Z" | ---
datasets:
- yjseo/etth1_for_llm2
- yjseo/etth1_for_llm
metrics:
- mse
---
This script uses the Hugging Face model 'MRNH/Feedformer-ett-hourly' to perform some task on the ETT-small dataset.
Model: 'MRNH/Feedformer-ett-hourly'
- This model is a transformer-based model designed for some task. (Replace 'some task' with the actual task the model is designed for)
Dataset: 'ETT-small'
- This dataset contains... (Replace with a brief description of the dataset)
The script performs the following steps:
1. Load the 'MRNH/Feedformer-ett-hourly' model from the Hugging Face model hub.
2. Load the 'ETT-small' dataset.
3. Preprocess the dataset as required by the model.
4. Feed the preprocessed data into the model and collect the outputs.
5. Postprocess the outputs and save the results.
Example:
from transformers import AutoModel
model = AutoModel.from_pretrained('MRNH/Feedformer-ett-hourly')
For the model selection experiments, look at:
https://wandb.ai/gec023/baseline-forecasting |
TOMFORD79/bittensor_com2.13 | TOMFORD79 | "2025-03-31T08:03:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-31T07:11:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
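A rough starter sketch, assuming the checkpoint loads as a standard causal LM (suggested by the repo's `llama` and `text-generation` tags, but unverified):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TOMFORD79/bittensor_com2.13"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)  # generation settings are arbitrary
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```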
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
genki10/Trial3BERT_AugV8_k1_task1_organization_sp030_lw010_fold3 | genki10 | "2025-04-05T23:19:22Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-05T23:08:22Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k1_task1_organization_sp030_lw010_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k1_task1_organization_sp030_lw010_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9226
- Qwk: 0.4499
- Mse: 0.9237
- Rmse: 0.9611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 1.0 | 2 | 11.2303 | 0.0210 | 11.2284 | 3.3509 |
| No log | 2.0 | 4 | 10.5705 | 0.0 | 10.5686 | 3.2509 |
| No log | 3.0 | 6 | 9.7782 | 0.0 | 9.7764 | 3.1267 |
| No log | 4.0 | 8 | 8.4493 | 0.0 | 8.4477 | 2.9065 |
| No log | 5.0 | 10 | 7.4830 | 0.0 | 7.4814 | 2.7352 |
| No log | 6.0 | 12 | 6.9036 | 0.0 | 6.9020 | 2.6272 |
| No log | 7.0 | 14 | 6.0098 | 0.0120 | 6.0083 | 2.4512 |
| No log | 8.0 | 16 | 5.0390 | 0.0 | 5.0378 | 2.2445 |
| No log | 9.0 | 18 | 4.1175 | 0.0 | 4.1165 | 2.0289 |
| No log | 10.0 | 20 | 3.2194 | 0.0 | 3.2184 | 1.7940 |
| No log | 11.0 | 22 | 2.6332 | 0.0 | 2.6323 | 1.6224 |
| No log | 12.0 | 24 | 2.1755 | 0.1121 | 2.1747 | 1.4747 |
| No log | 13.0 | 26 | 1.8914 | 0.0193 | 1.8908 | 1.3750 |
| No log | 14.0 | 28 | 1.5646 | 0.0166 | 1.5641 | 1.2506 |
| No log | 15.0 | 30 | 1.3367 | 0.0166 | 1.3361 | 1.1559 |
| No log | 16.0 | 32 | 1.0585 | 0.0102 | 1.0581 | 1.0286 |
| No log | 17.0 | 34 | 0.9670 | 0.0126 | 0.9665 | 0.9831 |
| No log | 18.0 | 36 | 0.8270 | 0.3290 | 0.8267 | 0.9092 |
| No log | 19.0 | 38 | 0.9405 | 0.1974 | 0.9405 | 0.9698 |
| No log | 20.0 | 40 | 0.8900 | 0.2827 | 0.8900 | 0.9434 |
| No log | 21.0 | 42 | 0.7356 | 0.3921 | 0.7356 | 0.8577 |
| No log | 22.0 | 44 | 0.7692 | 0.4411 | 0.7693 | 0.8771 |
| No log | 23.0 | 46 | 1.0328 | 0.2886 | 1.0331 | 1.0164 |
| No log | 24.0 | 48 | 1.1399 | 0.3031 | 1.1405 | 1.0679 |
| No log | 25.0 | 50 | 1.4264 | 0.2675 | 1.4273 | 1.1947 |
| No log | 26.0 | 52 | 1.4741 | 0.2970 | 1.4751 | 1.2145 |
| No log | 27.0 | 54 | 2.5024 | 0.1517 | 2.5033 | 1.5822 |
| No log | 28.0 | 56 | 2.2729 | 0.1961 | 2.2740 | 1.5080 |
| No log | 29.0 | 58 | 0.7743 | 0.5097 | 0.7750 | 0.8804 |
| No log | 30.0 | 60 | 0.6879 | 0.5176 | 0.6885 | 0.8298 |
| No log | 31.0 | 62 | 1.0227 | 0.3924 | 1.0235 | 1.0117 |
| No log | 32.0 | 64 | 1.5795 | 0.3009 | 1.5804 | 1.2571 |
| No log | 33.0 | 66 | 0.9221 | 0.4563 | 0.9229 | 0.9607 |
| No log | 34.0 | 68 | 0.6679 | 0.5484 | 0.6686 | 0.8177 |
| No log | 35.0 | 70 | 0.7139 | 0.4896 | 0.7146 | 0.8453 |
| No log | 36.0 | 72 | 1.1800 | 0.3673 | 1.1809 | 1.0867 |
| No log | 37.0 | 74 | 0.9786 | 0.4015 | 0.9794 | 0.9897 |
| No log | 38.0 | 76 | 0.8204 | 0.4937 | 0.8213 | 0.9063 |
| No log | 39.0 | 78 | 1.1347 | 0.3801 | 1.1357 | 1.0657 |
| No log | 40.0 | 80 | 1.2420 | 0.3285 | 1.2430 | 1.1149 |
| No log | 41.0 | 82 | 0.9080 | 0.4526 | 0.9090 | 0.9534 |
| No log | 42.0 | 84 | 1.0439 | 0.3822 | 1.0450 | 1.0222 |
| No log | 43.0 | 86 | 1.1436 | 0.3552 | 1.1447 | 1.0699 |
| No log | 44.0 | 88 | 0.8168 | 0.4803 | 0.8177 | 0.9043 |
| No log | 45.0 | 90 | 0.9004 | 0.4630 | 0.9014 | 0.9494 |
| No log | 46.0 | 92 | 1.5484 | 0.2787 | 1.5496 | 1.2448 |
| No log | 47.0 | 94 | 1.5001 | 0.2906 | 1.5013 | 1.2253 |
| No log | 48.0 | 96 | 0.9377 | 0.4615 | 0.9388 | 0.9689 |
| No log | 49.0 | 98 | 0.9226 | 0.4499 | 0.9237 | 0.9611 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
juliajoanna/sdxl-flintstones_finetuning_3 | juliajoanna | "2023-11-04T04:02:58Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"base_model:juliajoanna/sdxl-flintstones_finetuning_1",
"base_model:finetune:juliajoanna/sdxl-flintstones_finetuning_1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-11-02T14:09:23Z" |
---
license: creativeml-openrail-m
base_model: juliajoanna/sdxl-flintstones_finetuning_1
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - juliajoanna/sdxl-flintstones_finetuning_3
This pipeline was finetuned from **juliajoanna/sdxl-flintstones_finetuning_1** on the **None** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: Fred is driving a car:




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
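A minimal inference sketch, assuming the pipeline loads with the standard diffusers SDXL API (prompt reused from the examples above):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "juliajoanna/sdxl-flintstones_finetuning_3", torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe(prompt="Fred is driving a car").images[0]
image.save("fred_is_driving_a_car.png")
```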
|
avankumar/Battery_QandA | avankumar | "2024-12-12T07:01:57Z" | 36 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-12T06:59:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nicknotname/wnutNer | Nicknotname | "2024-06-25T18:08:02Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-25T18:00:41Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Nicknotname/wnutNer
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Nicknotname/wnutNer
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1137
- Validation Loss: 0.2552
- Train Precision: 0.5898
- Train Recall: 0.4163
- Train F1: 0.4881
- Train Accuracy: 0.9467
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
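A minimal inference sketch, assuming the TensorFlow weights load through the `pipeline` API (suggested by the repo's `tf` tag):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Nicknotname/wnutNer",
    framework="tf",                 # the repo ships TensorFlow weights
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("My name is Wolfgang and I live in Berlin"))
```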
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3393 | 0.3197 | 0.3455 | 0.0455 | 0.0803 | 0.9248 | 0 |
| 0.1550 | 0.2591 | 0.5387 | 0.3744 | 0.4418 | 0.9433 | 1 |
| 0.1137 | 0.2552 | 0.5898 | 0.4163 | 0.4881 | 0.9467 | 2 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-GGUF | afrideva | "2023-11-08T16:29:04Z" | 84 | 1 | null | [
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"pt",
"en",
"license:mit",
"region:us"
] | text-generation | "2023-11-08T16:25:56Z" | ---
base_model: cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1
inference: false
language:
- pt
- en
license: mit
model_creator: cnmoro
model_name: TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-GGUF
Quantized GGUF model files for [TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1) from [cnmoro](https://huggingface.co/cnmoro)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q2_k.gguf) | q2_k | 482.14 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q3_k_m.gguf) | q3_k_m | 549.85 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q6_k.gguf) | q6_k | 903.41 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q8_0.gguf) | q8_0 | 1.17 GB |
## Original Model Card:
Finetuned version of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T on a Portuguese instruct dataset, using axolotl.
This is a work in progress; the final version will be v3 or v4.
Prompt format:
f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n" |
gabrielok8/santisalvatierra | gabrielok8 | "2025-02-13T15:31:19Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-02-13T14:51:11Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
damapika/distilbert-base-uncased_mod | damapika | "2023-05-19T13:08:24Z" | 26 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:quoref",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-04-18T14:53:11Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- quoref
model-index:
- name: distilbert-base-uncased_mod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_mod
This model is a fine-tuned version of [damapika/distilbert-base-uncased_mod](https://huggingface.co/damapika/distilbert-base-uncased_mod) on the quoref dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0147
## Model description
More information needed
## Intended uses & limitations
More information needed
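A minimal extractive-QA sketch, assuming the checkpoint works with the standard `pipeline` API (question and context are arbitrary examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="damapika/distilbert-base-uncased_mod")
result = qa(
    question="Where does Ana live?",
    context="Ana moved to Lisbon in 2019 and has lived there ever since.",
)
print(result["answer"], result["score"])
```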
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6873 | 1.0 | 1213 | 1.6969 |
| 1.1652 | 2.0 | 2426 | 1.8045 |
| 0.7953 | 3.0 | 3639 | 2.0147 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF | mradermacher | "2024-10-03T00:36:07Z" | 204 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated",
"base_model:quantized:Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-03T00:24:33Z" | ---
base_model: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
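A minimal download sketch, assuming `huggingface_hub`; the filename comes from the quant table below:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF",
    filename="VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_M.gguf",
)
print(path)  # local path to the GGUF file, ready for llama.cpp-compatible runtimes
```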
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.IQ3_XS.gguf) | IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.IQ3_S.gguf) | IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.IQ3_M.gguf) | IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/resolve/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF | mradermacher | "2024-12-21T10:00:08Z" | 7 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:g-ronimo/phi-2-OpenHermes-2.5-v2",
"base_model:quantized:g-ronimo/phi-2-OpenHermes-2.5-v2",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-21T09:20:58Z" | ---
base_model: g-ronimo/phi-2-OpenHermes-2.5-v2
datasets:
- teknium/OpenHermes-2.5
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/g-ronimo/phi-2-OpenHermes-2.5-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 0.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 0.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q2_K.gguf) | i1-Q2_K | 1.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-OpenHermes-2.5-v2-i1-GGUF/resolve/main/phi-2-OpenHermes-2.5-v2.i1-Q6_K.gguf) | i1-Q6_K | 2.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
thakkkkkk/1d88458b-3503-4d9a-ae1d-0d49534c465a | thakkkkkk | "2025-01-14T04:03:39Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-14T03:48:17Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1d88458b-3503-4d9a-ae1d-0d49534c465a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7e5a870f23ac7879_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7e5a870f23ac7879_train_data.json
type:
field_input: Query
field_instruction: Instruction
field_output: Document
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/1d88458b-3503-4d9a-ae1d-0d49534c465a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/7e5a870f23ac7879_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 270416a6-5611-4748-a460-38254426a2bb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 270416a6-5611-4748-a460-38254426a2bb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1d88458b-3503-4d9a-ae1d-0d49534c465a
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2402 | 0.0336 | 200 | 2.2889 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
artbreguez/ML-Agents-Pyramids | artbreguez | "2023-03-27T16:13:31Z" | 14 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-03-27T16:11:33Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: artbreguez/ML-Agents-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
DevQuasar/huihui-ai.EXAONE-3.5-32B-Instruct-abliterated-GGUF | DevQuasar | "2025-02-01T23:13:30Z" | 48 | 0 | null | [
"gguf",
"text-generation",
"base_model:huihui-ai/EXAONE-3.5-32B-Instruct-abliterated",
"base_model:quantized:huihui-ai/EXAONE-3.5-32B-Instruct-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-12-22T07:00:51Z" | ---
base_model:
- huihui-ai/EXAONE-3.5-32B-Instruct-abliterated
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [huihui-ai/EXAONE-3.5-32B-Instruct-abliterated](https://huggingface.co/huihui-ai/EXAONE-3.5-32B-Instruct-abliterated)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
lunarsylph/gemmacell_v7 | lunarsylph | "2024-03-23T17:38:17Z" | 137 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-11T01:34:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy619/PHI30515HMA1H | Litzy619 | "2024-05-16T19:32:59Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | "2024-05-16T06:53:48Z" | ---
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- generated_from_trainer
model-index:
- name: PHI30515HMA1H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PHI30515HMA1H
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2832 | 0.09 | 10 | 2.7337 |
| 1.7648 | 0.18 | 20 | 0.3745 |
| 0.3839 | 0.27 | 30 | 0.2589 |
| 0.3285 | 0.36 | 40 | 0.2520 |
| 0.3202 | 0.45 | 50 | 0.2229 |
| 0.6502 | 0.54 | 60 | 0.2693 |
| 0.3048 | 0.63 | 70 | 0.1647 |
| 0.2068 | 0.73 | 80 | 0.1318 |
| 0.1411 | 0.82 | 90 | 0.1621 |
| 0.1775 | 0.91 | 100 | 0.0975 |
| 0.1835 | 1.0 | 110 | 0.0954 |
| 0.1014 | 1.09 | 120 | 0.0876 |
| 0.1148 | 1.18 | 130 | 0.0976 |
| 0.1506 | 1.27 | 140 | 0.0760 |
| 0.128 | 1.36 | 150 | 0.0750 |
| 0.0883 | 1.45 | 160 | 0.0736 |
| 0.0913 | 1.54 | 170 | 0.0692 |
| 0.0795 | 1.63 | 180 | 0.0681 |
| 0.0927 | 1.72 | 190 | 0.0669 |
| 0.087 | 1.81 | 200 | 0.0667 |
| 0.0606 | 1.9 | 210 | 0.0682 |
| 0.0627 | 1.99 | 220 | 0.0679 |
| 0.0441 | 2.08 | 230 | 0.0705 |
| 0.0543 | 2.18 | 240 | 0.0813 |
| 0.0413 | 2.27 | 250 | 0.0839 |
| 0.0414 | 2.36 | 260 | 0.0775 |
| 0.0462 | 2.45 | 270 | 0.0756 |
| 0.0411 | 2.54 | 280 | 0.0763 |
| 0.0392 | 2.63 | 290 | 0.0768 |
| 0.0407 | 2.72 | 300 | 0.0771 |
| 0.0508 | 2.81 | 310 | 0.0755 |
| 0.0577 | 2.9 | 320 | 0.0746 |
| 0.0431 | 2.99 | 330 | 0.0747 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
nickmiller795/dqn-SpaceInvadersNoFrameskip-v4 | nickmiller795 | "2024-02-04T06:47:25Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-04T06:46:51Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 589.00 +/- 204.58
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nickmiller795 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nickmiller795 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nickmiller795
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
SynapseQAI/T5-base-wmt14 | SynapseQAI | "2024-10-21T06:00:43Z" | 5 | 0 | null | [
"safetensors",
"t5",
"fr",
"en",
"dataset:wmt/wmt14",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:mit",
"region:us"
] | null | "2024-10-16T08:28:00Z" | ---
license: mit
datasets:
- wmt/wmt14
language:
- fr
- en
base_model:
- google-t5/t5-base
---
This model was finetuned on 50K French-English sentence pairs from the WMT14 Fr-En dataset.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
# Load the pre-trained model and tokenizer
model_name = "SynapseQAI/T5-base-wmt14"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
# Function to translate a French sentence to English using beam search
def translate(sentence):
# Prepare the input for the model
input_text = f": {sentence}"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
# Generate translation using beam search
outputs = model.generate(input_ids, num_beams=3, max_length=50, early_stopping=True)
# Decode the generated translation
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
return translation
# French sentences from easy to advanced
sentences = [
"Le soleil se lève à l'est et se couche à l'ouest.",
"Les scientifiques travaillent dur pour trouver un remède.",
"La capitale de la France est Paris.",
"Je voudrais un café s'il vous plaît.",
"Il fait beau aujourd'hui.",
"J'aime lire des livres et regarder des films pendant mon temps libre.",
"Si j'avais su que tu venais, j'aurais préparé quelque chose de spécial pour le dîner.",
"Même si les avancées technologiques apportent de nombreux avantages, elles posent également des défis éthiques considérables qu'il nous faut relever."
]
# Translate each sentence and print the best translation
for sentence in sentences:
translated_sentence = translate(sentence)
print(f"French: {sentence}\nEnglish: {translated_sentence}\n")
```
|
jonatasgrosman/exp_w2v2t_pl_hubert_s6 | jonatasgrosman | "2022-07-10T18:59:05Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"pl",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-10T18:58:41Z" | ---
language:
- pl
license: apache-2.0
tags:
- automatic-speech-recognition
- pl
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pl_hubert_s6
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
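A minimal transcription sketch, assuming the standard transformers ASR pipeline ("speech.wav" is a placeholder for your own 16kHz audio file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_pl_hubert_s6")
print(asr("speech.wav")["text"])
```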
|
davidschulte/ESM_Divyanshu__indicxnli_te | davidschulte | "2025-03-26T15:20:24Z" | 18 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:Divyanshu/indicxnli",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-08T14:38:20Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- Divyanshu/indicxnli
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM Divyanshu/indicxnli
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** Divyanshu/indicxnli
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** Divyanshu/indicxnli
- **Subset [optional]:** te
- **Text Column:** ['premise', 'hypothesis']
- **Label Column:** label
- **Dataset Split:** train
- **Sample size [optional]:** 10000
- **Sample seed [optional]:** 42
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can also be used like a search engine on the Hugging Face Hub: you can find tasks similar to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
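Since the task ranking above lists the ESM architecture as `linear`, a minimal ESM can be pictured as a single linear layer that maps base-model embeddings to approximations of the fine-tuned model's embeddings. The sketch below is illustrative only: the embedding dimension, the synthetic data, and the training loop are assumptions, while the optimizer settings (AdamW, learning rate 0.001, weight decay 0.01, 10 epochs, MSE loss) mirror the ESM hyperparameters stated above.

```python
import torch
import torch.nn as nn

class LinearESM(nn.Module):
    """Minimal linear Embedding Space Map: base embedding -> approximate fine-tuned embedding."""

    def __init__(self, dim: int = 768):  # 768 assumes a BERT-base-sized encoder
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, base_embedding: torch.Tensor) -> torch.Tensor:
        return self.proj(base_embedding)

# Regress fine-tuned-model embeddings from base-model embeddings with MSE loss.
esm = LinearESM(dim=768)
optimizer = torch.optim.AdamW(esm.parameters(), lr=1e-3, weight_decay=0.01)
loss_fn = nn.MSELoss()

base_emb = torch.randn(128, 768)       # placeholder: embeddings from the base model
finetuned_emb = torch.randn(128, 768)  # placeholder: embeddings from the fine-tuned model

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(esm(base_emb), finetuned_emb)
    loss.backward()
    optimizer.step()
```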
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
lengxingxin/phi3.5-lora-1000-dc-cicids2017 | lengxingxin | "2024-08-21T14:53:17Z" | 54 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-21T14:50:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MultiBertGunjanPatrick/multiberts-seed-3-300k | MultiBertGunjanPatrick | "2021-10-04T05:07:18Z" | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:04Z" | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 300k (uncased)
The seed-3 MultiBERTs checkpoint at 300k training steps: a pretrained BERT model for English trained with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-300k')
model = BertModel.from_pretrained("multiberts-seed-3-300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
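A rough, self-contained sketch of that 15% / 80-10-10 rule — this is not the original preprocessing code; the `[MASK]` id (103) matches the standard BERT uncased vocabulary, and -100 is the usual PyTorch ignore index:

```python
import random

MASK_TOKEN_ID = 103   # [MASK] in the standard BERT uncased vocabulary
VOCAB_SIZE = 30000    # WordPiece vocabulary size quoted above

def mask_tokens(token_ids):
    """Apply the 15% selection with the 80/10/10 replacement rule to a list of token ids."""
    inputs, labels = [], []
    for token in token_ids:
        if random.random() < 0.15:              # 15% of tokens are masked
            labels.append(token)                # original token is the prediction target
            r = random.random()
            if r < 0.8:                         # 80%: replace with [MASK]
                inputs.append(MASK_TOKEN_ID)
            elif r < 0.9:                       # 10%: replace with a random token
                inputs.append(random.randrange(VOCAB_SIZE))
            else:                               # 10%: keep the token unchanged
                inputs.append(token)
        else:
            inputs.append(token)
            labels.append(-100)                 # ignored by the loss
    return inputs, labels
```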
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
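In Hugging Face terms, that schedule corresponds to `get_linear_schedule_with_warmup`; the sketch below only mirrors the quoted hyperparameters (the original training used Google's TPU stack, not this exact code):

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-base-uncased")  # stand-in for the MultiBERTs setup
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
# Warm up for 10,000 steps, then decay linearly over the two million total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```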
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
tarabukinivan/c00502ae-21ed-42f0-9a13-7b8450565040 | tarabukinivan | "2025-01-28T00:40:22Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-27T23:59:28Z" | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c00502ae-21ed-42f0-9a13-7b8450565040
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 656aeb34f8bb5745_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/656aeb34f8bb5745_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: tarabukinivan/c00502ae-21ed-42f0-9a13-7b8450565040
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/656aeb34f8bb5745_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ba15d1f6-1b00-495f-b909-7674b8afcf2f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ba15d1f6-1b00-495f-b909-7674b8afcf2f
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c00502ae-21ed-42f0-9a13-7b8450565040
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 2.9660 |
| 2.9238 | 0.0015 | 5 | 2.7996 |
| 2.6059 | 0.0031 | 10 | 2.1081 |
| 1.5844 | 0.0046 | 15 | 1.2249 |
| 1.1039 | 0.0062 | 20 | 0.6434 |
| 0.5317 | 0.0077 | 25 | 0.5736 |
| 0.6317 | 0.0092 | 30 | 0.5643 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/BlackSheep-RP-12B-GGUF | mradermacher | "2024-11-14T23:25:09Z" | 147 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:KOOWEEYUS/BlackSheep-RP-12B",
"base_model:quantized:KOOWEEYUS/BlackSheep-RP-12B",
"license:artistic-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-13T00:36:21Z" | ---
base_model: KOOWEEYUS/BlackSheep-RP-12B
language:
- en
library_name: transformers
license: artistic-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/KOOWEEYUS/BlackSheep-RP-12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/BlackSheep-RP-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-RP-12B-GGUF/resolve/main/BlackSheep-RP-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
openlm-research/open_llama_3b_step_200000 | openlm-research | "2024-11-20T22:55:37Z" | 5 | 0 | null | [
"safetensors",
"llama",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"region:us"
] | null | "2024-11-20T03:50:20Z" | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as the preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer needed to obtain the original LLaMA tokenizer and weights. Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.
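A quick way to verify that the BOS token ends up at the front of your inputs — the slow `LlamaTokenizer` prepends it by default, but this is worth checking in your own pipeline:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_3b")
ids = tokenizer("Q: What is the largest animal?\nA:").input_ids
assert ids[0] == tokenizer.bos_token_id == 1  # BOS (id=1) should lead the sequence
```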
## Dataset and Training
We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT |
| ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.32 | 0.35 |
| anli_r3/acc | 0.35 | 0.37 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.39 | 0.37 | 0.34 | 0.39 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.38 | 0.37 | 0.42 |
| arc_easy/acc | 0.67 | 0.68 | 0.72 | 0.69 | 0.74 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.68 | 0.65 | 0.70 |
| ddboolq/acc | 0.50 | 0.56 | 0.53 | 0.49 | 0.71 |
| hellaswag/acc | 0.36 | 0.36 | 0.63 | 0.43 | 0.54 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.72 | 0.67 | 0.73 |
| openbookqa/acc | 0.29 | 0.29 | 0.30 | 0.27 | 0.30 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.40 | 0.40 | 0.41 |
| piqa/acc | 0.75 | 0.78 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.77 | 0.76 | 0.78 |
| record/em | 0.88 | 0.91 | 0.89 | 0.88 | 0.90 |
| record/f1 | 0.89 | 0.91 | 0.90 | 0.89 | 0.90 |
| rte/acc | 0.54 | 0.56 | 0.60 | 0.58 | 0.65 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.23 | 0.22 | 0.22 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.35 | 0.35 | 0.35 |
| wic/acc | 0.50 | 0.50 | 0.51 | 0.48 | 0.49 |
| winogrande/acc | 0.64 | 0.68 | 0.67 | 0.62 | 0.67 |
| Average | 0.51 | 0.53 | 0.55 | 0.52 | 0.56 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We'd like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We'd also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We'd like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
mrferr3t/5da8e928-2736-4b58-8aed-15cfb7013228 | mrferr3t | "2025-02-06T15:50:58Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-06T15:36:25Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5da8e928-2736-4b58-8aed-15cfb7013228
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 4e29391d28622e8f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4e29391d28622e8f_train_data.json
type:
field_input: ruby_text
field_instruction: speaker
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/5da8e928-2736-4b58-8aed-15cfb7013228
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/4e29391d28622e8f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.02
wandb_entity: null
wandb_mode: online
wandb_name: c7d834e3-4b05-4ec3-9ce6-deb01a206c99
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c7d834e3-4b05-4ec3-9ce6-deb01a206c99
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5da8e928-2736-4b58-8aed-15cfb7013228
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 524
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | 1.1939 |
| No log | 0.0476 | 40 | 0.7033 |
| No log | 0.0952 | 80 | 0.0937 |
| 0.6012 | 0.1429 | 120 | 0.0486 |
| 0.6012 | 0.1905 | 160 | 0.0427 |
| 0.0548 | 0.2381 | 200 | 0.0334 |
| 0.0548 | 0.2857 | 240 | 0.0311 |
| 0.0548 | 0.3333 | 280 | 0.0310 |
| 0.0366 | 0.3810 | 320 | 0.0335 |
| 0.0366 | 0.4286 | 360 | 0.0221 |
| 0.0289 | 0.4762 | 400 | 0.0251 |
| 0.0289 | 0.5238 | 440 | 0.0213 |
| 0.0289 | 0.5714 | 480 | 0.0210 |
| 0.0333 | 0.6190 | 520 | 0.0197 |
| 0.0333 | 0.6667 | 560 | 0.0205 |
| 0.0286 | 0.7143 | 600 | 0.0166 |
| 0.0286 | 0.7619 | 640 | 0.0170 |
| 0.0286 | 0.8095 | 680 | 0.0171 |
| 0.0237 | 0.8571 | 720 | 0.0195 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Rendel/q-FrozenLake-v1-4x4-noSlippery | Rendel | "2023-03-08T15:41:14Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-08T15:41:08Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` here refers to the pickle-loading helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="Rendel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
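A short usage sketch for the loaded artifact. It assumes — as in the Deep RL course template this card follows — that the pickled dictionary exposes the learned table under a `qtable` key; verify the keys of your downloaded file, and note that older and newer Gym versions differ in their `reset`/`step` signatures:

```python
import numpy as np

state = env.reset()  # newer Gym/Gymnasium: `state, info = env.reset()`
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)     # newer Gym versions return 5 values
```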
|
StepLaw/StepLaw-N_214M-D_11.0B-LR2.210e-02-BS65536 | StepLaw | "2025-04-06T01:06:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"step1",
"text-generation",
"StepLaw",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T01:04:49Z" | |
Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-100K-v0.1 | Magpie-Align | "2024-07-03T05:31:24Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"arxiv:2406.08464",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-31T17:38:44Z" | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: Llama-3-8B-Magpie-Pro-SFT-100K-v0.1
results: []
---
# Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-100K-v0.1
Project Web: [https://magpie-align.github.io/](https://magpie-align.github.io/)
Arxiv Technical Report: [https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464)
Codes: [https://github.com/magpie-align/magpie](https://github.com/magpie-align/magpie)
## About This Model
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the **first 100K examples** of the [Magpie-Align/Magpie-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered) dataset.
Please use [Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-v0.1](https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-v0.1) instead, which achieves better performance.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8869 | 0.0036 | 1 | 0.9139 |
| 0.5854 | 0.3344 | 92 | 0.6158 |
| 0.5218 | 0.6688 | 184 | 0.5455 |
| 0.4878 | 1.0032 | 276 | 0.5125 |
| 0.3734 | 1.3226 | 368 | 0.5091 |
| 0.3647 | 1.6570 | 460 | 0.5056 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: Magpie-Align/Magpie-Pro-300K-Filtered-First100K
type: sharegpt
conversation: llama3
dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: ./out_Llama-3-8B-Magpie-Pro-100K-FilteredL
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 3
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
|
czz23/journal-setfit-model | czz23 | "2023-06-25T10:37:43Z" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-06-25T10:34:44Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# czz23/journal-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal training sketch follows this list).
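A hedged sketch of those two steps using the `SetFitTrainer` API from earlier `setfit` releases — the base checkpoint, dataset, and sample count below are stand-ins, not the configuration used to train this model:

```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer

# Few-shot setup: a small number of labeled examples per class is often enough.
train_ds = load_dataset("sst2", split="train").shuffle(seed=42).select(range(64))

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    column_mapping={"sentence": "text", "label": "label"},  # map dataset columns to SetFit's names
)
trainer.train()   # contrastive fine-tuning of the body, then fitting the classification head
preds = model(["i loved the spiderman movie!"])
```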
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("czz23/journal-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
xszhou/ppo-LunarLander-v2 | xszhou | "2023-08-24T03:44:40Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-24T03:44:16Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.49 +/- 17.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual `huggingface_sb3` naming convention; check the repo's files if it differs):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it as a PPO policy.
checkpoint = load_from_hub(repo_id="xszhou/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
doc2query/msmarco-portuguese-mt5-base-v1 | doc2query | "2022-04-29T12:08:25Z" | 13 | 10 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"pt",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-04-29T12:07:58Z" | ---
language: pt
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python é uma linguagem de programação de alto nível, interpretada de script, imperativa, orientada a objetos, funcional, de tipagem dinâmica e forte. Foi lançada por Guido van Rossum em 1991. Atualmente, possui um modelo de desenvolvimento comunitário, aberto e gerenciado pela organização sem fins lucrativos Python Software Foundation. Apesar de várias partes da linguagem possuírem padrões e especificações formais, a linguagem, como um todo, não é formalmente especificada. O padrão de facto é a implementação CPython."
license: apache-2.0
---
# doc2query/msmarco-portuguese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene (a minimal indexing sketch follows this list). The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
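A minimal sketch of the expansion-then-index step, assuming the `elasticsearch` Python client (v8-style `document=` argument) and a hypothetical `generate_queries` helper that returns the generated queries (a returning variant of the `create_queries` function shown in the Usage section below); the index name and document schema are placeholders:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

paragraphs = ["Python é uma linguagem de programação de alto nível ..."]
for doc_id, para in enumerate(paragraphs):
    queries = generate_queries(para)  # hypothetical helper: returns 20-40 generated queries
    es.index(
        index="expanded-docs",  # placeholder index name
        id=doc_id,
        document={"text": para + " " + " ".join(queries)},  # paragraph plus its queries in one BM25 field
    )
```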
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-portuguese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python é uma linguagem de programação de alto nível, interpretada de script, imperativa, orientada a objetos, funcional, de tipagem dinâmica e forte. Foi lançada por Guido van Rossum em 1991. Atualmente, possui um modelo de desenvolvimento comunitário, aberto e gerenciado pela organização sem fins lucrativos Python Software Foundation. Apesar de várias partes da linguagem possuírem padrões e especificações formais, a linguagem, como um todo, não é formalmente especificada. O padrão de facto é a implementação CPython."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
asenella/ms_MMVAEPlus_beta_10_scale_True_seed_0 | asenella | "2023-07-27T11:53:07Z" | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | "2023-07-27T11:53:05Z" | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
mradermacher/NT-Java-1.1B-GGUF | mradermacher | "2024-07-05T10:10:44Z" | 134 | 0 | transformers | [
"transformers",
"gguf",
"NarrowTransformer",
"code",
"dataset:bigcode/starcoderdata",
"base_model:infosys/NT-Java-1.1B",
"base_model:quantized:infosys/NT-Java-1.1B",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
] | null | "2024-07-05T10:01:11Z" | ---
base_model: infosys/NT-Java-1.1B
datasets:
- bigcode/starcoderdata
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
extra_gated_prompt: "## Model License Agreement\nPlease read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.\n "
language:
- code
library_name: transformers
license: bigcode-openrail-m
quantized_by: mradermacher
tags:
- NarrowTransformer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/infosys/NT-Java-1.1B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.IQ3_XS.gguf) | IQ3_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.IQ3_S.gguf) | IQ3_S | 0.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.IQ3_M.gguf) | IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.IQ4_XS.gguf) | IQ4_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q5_K_M.gguf) | Q5_K_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q6_K.gguf) | Q6_K | 1.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.Q8_0.gguf) | Q8_0 | 1.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NT-Java-1.1B-GGUF/resolve/main/NT-Java-1.1B.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jethrowang/vanilla-whisper-medium_evaluated_on_lavalier | jethrowang | "2024-08-17T17:45:13Z" | 5 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"zh",
"dataset:formospeech/hat_asr_aligned",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | null | "2024-08-05T13:27:27Z" | ---
language:
- zh
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- formospeech/hat_asr_aligned
model-index:
- name: Whisper Medium Hakka Condenser
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Hakka Condenser
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the HAT ASR Aligned dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0191
- eval_cer: 0.6184
- eval_runtime: 2123.8167
- eval_samples_per_second: 2.147
- eval_steps_per_second: 0.134
- step: 0
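A minimal inference sketch (assuming the checkpoint loads with the standard `transformers` ASR pipeline; `sample.wav` is a placeholder audio file):

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for transcription.
asr = pipeline(
    "automatic-speech-recognition",
    model="jethrowang/vanilla-whisper-medium_evaluated_on_lavalier",
)

print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder
```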
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1521
- training_steps: 15215
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
LeroyDyer/Llava_1.5_7b_4_bit | LeroyDyer | "2024-03-23T12:59:16Z" | 102 | 1 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"image-to-text",
"en",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"region:us"
] | image-to-text | "2024-03-23T12:46:14Z" | ---
language:
- en
pipeline_tag: image-to-text
inference: false
arxiv: 2304.08485
datasets:
- liuhaotian/LLaVA-Instruct-150K
---
# LLaVA Model Card

Below is the model card of the Llava 7b model, copied from the original Llava model card that you can find [here](https://huggingface.co/liuhaotian/llava-v1.5-13b).
Also check out the Google Colab demo to run Llava on a free-tier Google Colab instance: [](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing)
Or check out our Spaces demo! [](https://huggingface.co/spaces/llava-hf/llava-4bit)
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-v1.5-7B was trained in September 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## How to use the model
First, make sure to have `transformers >= 4.35.3`.
The model supports multi-image and multi-prompt generation, meaning you can pass multiple images in your prompt. Also make sure to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and add the token `<image>` at the location where you want to query images:
### Using `pipeline`:
Below we use the [`"llava-hf/llava-1.5-7b-hf"`](https://huggingface.co/llava-hf/llava-1.5-7b-hf) checkpoint.
```python
from transformers import pipeline
from PIL import Image
import requests
model_id = "llava-hf/llava-1.5-7b-hf"
pipe = pipeline("image-to-text", model=model_id)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
>>> {"generated_text": "\nUSER: What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: Lava"}
```
### Using pure `transformers`:
Below is an example script to run generation in `float16` precision on a GPU device:
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
model_id = "llava-hf/llava-1.5-7b-hf"
prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(0)
processor = AutoProcessor.from_pretrained(model_id)
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
### Model optimization
#### 4-bit quantization through `bitsandbytes` library
First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Then simply change the snippet above as follows:
```diff
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ load_in_4bit=True
)
```
#### Use Flash-Attention 2 to further speed-up generation
First make sure to install `flash-attn`; refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above as follows:
```diff
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ use_flash_attention_2=True
).to(0)
```
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved. |
pranaydeeps/lettuce_pos_nl_mono | pranaydeeps | "2024-05-06T12:38:44Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-05-06T12:38:21Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos_final_mono_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos_final_mono_nl
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1115
- Precision: 0.9783
- Recall: 0.9784
- F1: 0.9783
- Accuracy: 0.9791
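A minimal usage sketch (assuming the checkpoint works with the standard `transformers` token-classification pipeline; the Dutch sentence is illustrative):

```python
from transformers import pipeline

# POS-tag a Dutch sentence with the fine-tuned RobBERT checkpoint.
tagger = pipeline(
    "token-classification",
    model="pranaydeeps/lettuce_pos_nl_mono",
    aggregation_strategy="simple",
)

print(tagger("Dit is een voorbeeldzin."))  # illustrative Dutch sentence
```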
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 69 | 3.7703 | 0.2597 | 0.1252 | 0.1689 | 0.2575 |
| No log | 2.0 | 138 | 1.0148 | 0.8058 | 0.8008 | 0.8033 | 0.8066 |
| No log | 3.0 | 207 | 0.3402 | 0.9302 | 0.9278 | 0.9290 | 0.9299 |
| No log | 4.0 | 276 | 0.2016 | 0.9559 | 0.9551 | 0.9555 | 0.9561 |
| No log | 5.0 | 345 | 0.1486 | 0.9643 | 0.9638 | 0.9641 | 0.9648 |
| No log | 6.0 | 414 | 0.1206 | 0.9697 | 0.9696 | 0.9697 | 0.9702 |
| No log | 7.0 | 483 | 0.1063 | 0.9720 | 0.9719 | 0.9720 | 0.9727 |
| 1.2192 | 8.0 | 552 | 0.0983 | 0.9734 | 0.9735 | 0.9735 | 0.9742 |
| 1.2192 | 9.0 | 621 | 0.0947 | 0.9746 | 0.9747 | 0.9746 | 0.9754 |
| 1.2192 | 10.0 | 690 | 0.0913 | 0.9753 | 0.9755 | 0.9754 | 0.9761 |
| 1.2192 | 11.0 | 759 | 0.0885 | 0.9761 | 0.9763 | 0.9762 | 0.9770 |
| 1.2192 | 12.0 | 828 | 0.0877 | 0.9764 | 0.9765 | 0.9764 | 0.9772 |
| 1.2192 | 13.0 | 897 | 0.0878 | 0.9767 | 0.9769 | 0.9768 | 0.9775 |
| 1.2192 | 14.0 | 966 | 0.0873 | 0.9767 | 0.9769 | 0.9768 | 0.9776 |
| 0.0688 | 15.0 | 1035 | 0.0877 | 0.9771 | 0.9773 | 0.9772 | 0.9779 |
| 0.0688 | 16.0 | 1104 | 0.0878 | 0.9773 | 0.9774 | 0.9773 | 0.9781 |
| 0.0688 | 17.0 | 1173 | 0.0897 | 0.9772 | 0.9773 | 0.9773 | 0.9781 |
| 0.0688 | 18.0 | 1242 | 0.0909 | 0.9775 | 0.9776 | 0.9776 | 0.9783 |
| 0.0688 | 19.0 | 1311 | 0.0917 | 0.9776 | 0.9778 | 0.9777 | 0.9785 |
| 0.0688 | 20.0 | 1380 | 0.0924 | 0.9778 | 0.9780 | 0.9779 | 0.9787 |
| 0.0688 | 21.0 | 1449 | 0.0949 | 0.9777 | 0.9779 | 0.9778 | 0.9785 |
| 0.0366 | 22.0 | 1518 | 0.0956 | 0.9776 | 0.9777 | 0.9777 | 0.9784 |
| 0.0366 | 23.0 | 1587 | 0.0962 | 0.9778 | 0.9780 | 0.9779 | 0.9786 |
| 0.0366 | 24.0 | 1656 | 0.0992 | 0.9777 | 0.9780 | 0.9779 | 0.9786 |
| 0.0366 | 25.0 | 1725 | 0.0999 | 0.9779 | 0.9781 | 0.9780 | 0.9787 |
| 0.0366 | 26.0 | 1794 | 0.1007 | 0.9780 | 0.9782 | 0.9781 | 0.9789 |
| 0.0366 | 27.0 | 1863 | 0.1022 | 0.9781 | 0.9782 | 0.9782 | 0.9789 |
| 0.0366 | 28.0 | 1932 | 0.1030 | 0.9781 | 0.9783 | 0.9782 | 0.9790 |
| 0.0226 | 29.0 | 2001 | 0.1055 | 0.9781 | 0.9782 | 0.9781 | 0.9789 |
| 0.0226 | 30.0 | 2070 | 0.1057 | 0.9780 | 0.9782 | 0.9781 | 0.9789 |
| 0.0226 | 31.0 | 2139 | 0.1067 | 0.9780 | 0.9781 | 0.9780 | 0.9788 |
| 0.0226 | 32.0 | 2208 | 0.1077 | 0.9780 | 0.9782 | 0.9781 | 0.9789 |
| 0.0226 | 33.0 | 2277 | 0.1085 | 0.9780 | 0.9781 | 0.9781 | 0.9789 |
| 0.0226 | 34.0 | 2346 | 0.1094 | 0.9781 | 0.9782 | 0.9781 | 0.9789 |
| 0.0226 | 35.0 | 2415 | 0.1095 | 0.9783 | 0.9784 | 0.9783 | 0.9791 |
| 0.0226 | 36.0 | 2484 | 0.1101 | 0.9780 | 0.9782 | 0.9781 | 0.9789 |
| 0.0159 | 37.0 | 2553 | 0.1114 | 0.9782 | 0.9784 | 0.9783 | 0.9791 |
| 0.0159 | 38.0 | 2622 | 0.1111 | 0.9782 | 0.9784 | 0.9783 | 0.9791 |
| 0.0159 | 39.0 | 2691 | 0.1114 | 0.9782 | 0.9784 | 0.9783 | 0.9791 |
| 0.0159 | 40.0 | 2760 | 0.1115 | 0.9783 | 0.9784 | 0.9783 | 0.9791 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0
- Datasets 2.18.0
- Tokenizers 0.13.2
|
Ranjit/test_4 | Ranjit | "2023-10-01T20:32:24Z" | 182 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:AmazonScience/massive",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-01T20:31:31Z" | ---
base_model: xxxxxxxxx
tags:
- generated_from_trainer
datasets:
- AmazonScience/massive
model-index:
- name: massive_indo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# massive_indo
This model is a fine-tuned version of [xxxxxxxxx](https://huggingface.co/xxxxxxxxx) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1952
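A minimal usage sketch (assuming the checkpoint works with the standard `transformers` text-classification pipeline; the utterance is an illustrative MASSIVE-style example):

```python
from transformers import pipeline

# Classify the intent of a MASSIVE-style utterance.
classifier = pipeline("text-classification", model="Ranjit/test_4")
print(classifier("wake me up at nine am on friday"))
```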
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.8949 | 2.08 | 100 | 4.8610 |
| 4.5401 | 4.17 | 200 | 4.5439 |
| 4.2447 | 6.25 | 300 | 4.2866 |
| 4.0005 | 8.33 | 400 | 4.0553 |
| 3.7874 | 10.42 | 500 | 3.8500 |
| 3.5807 | 12.5 | 600 | 3.6576 |
| 3.3725 | 14.58 | 700 | 3.4922 |
| 3.1977 | 16.67 | 800 | 3.3297 |
| 3.0234 | 18.75 | 900 | 3.1869 |
| 2.8863 | 20.83 | 1000 | 3.0530 |
| 2.7463 | 22.92 | 1100 | 2.9420 |
| 2.6025 | 25.0 | 1200 | 2.8200 |
| 2.4935 | 27.08 | 1300 | 2.7207 |
| 2.3695 | 29.17 | 1400 | 2.6279 |
| 2.2666 | 31.25 | 1500 | 2.5470 |
| 2.1584 | 33.33 | 1600 | 2.4736 |
| 2.0767 | 35.42 | 1700 | 2.4043 |
| 2.0374 | 37.5 | 1800 | 2.3516 |
| 1.9982 | 39.58 | 1900 | 2.3028 |
| 1.9241 | 41.67 | 2000 | 2.2679 |
| 1.8844 | 43.75 | 2100 | 2.2384 |
| 1.8488 | 45.83 | 2200 | 2.2143 |
| 1.8441 | 47.92 | 2300 | 2.1988 |
| 1.8368 | 50.0 | 2400 | 2.1952 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
SUUUUUMIN/moma_ver1 | SUUUUUMIN | "2025-02-27T03:14:46Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-26T05:42:52Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SUUUUUMIN
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tuanna08go/fa190837-d964-45ef-b324-ce596c9962cd | tuanna08go | "2025-01-07T05:30:43Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | "2025-01-07T05:11:51Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fa190837-d964-45ef-b324-ce596c9962cd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e63fedd3cb9e5a32_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e63fedd3cb9e5a32_train_data.json
type:
field_input: src
field_instruction: lp
field_output: ref
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/fa190837-d964-45ef-b324-ce596c9962cd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/e63fedd3cb9e5a32_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fa190837-d964-45ef-b324-ce596c9962cd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fa190837-d964-45ef-b324-ce596c9962cd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fa190837-d964-45ef-b324-ce596c9962cd
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1036
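Since this repository contains a LoRA adapter, a minimal loading sketch with PEFT (assuming the adapter weights live in this repo) looks like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.1-Storm-8B")
model = PeftModel.from_pretrained(base, "tuanna08go/fa190837-d964-45ef-b324-ce596c9962cd")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.1-Storm-8B")
```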
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 2.2431 |
| 1.7039 | 0.0041 | 10 | 1.7575 |
| 1.157 | 0.0082 | 20 | 1.1689 |
| 1.0457 | 0.0124 | 30 | 1.1273 |
| 0.9761 | 0.0165 | 40 | 1.1081 |
| 0.9259 | 0.0206 | 50 | 1.1036 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
finnstrom3693/opt-125m-lss-en | finnstrom3693 | "2024-09-19T22:34:11Z" | 89 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-19T22:33:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EdBerg/finance_finetuned_test | EdBerg | "2024-05-01T02:48:46Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-30T23:48:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
srisidp/qwen2-art-style-epoch-1 | srisidp | "2025-03-06T21:12:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-03-06T21:02:37Z" | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-art-style
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-art-style
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="srisidp/qwen2-7b-instruct-art-style", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/srisidp9/qwen2-7b-instruct-art-style3/runs/nkab87j2)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.50.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
NasimB/switchboard-rarity-seed | NasimB | "2023-07-30T00:46:51Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-29T21:29:44Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: switchboard-rarity-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switchboard-rarity-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0985
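A minimal generation sketch (assuming the checkpoint works with the standard `transformers` text-generation pipeline; the prompt is illustrative):

```python
from transformers import pipeline

# Generate a continuation with the fine-tuned GPT-2 checkpoint.
generator = pipeline("text-generation", model="NasimB/switchboard-rarity-seed")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```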
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3581 | 0.29 | 500 | 5.3466 |
| 5.0332 | 0.58 | 1000 | 4.9336 |
| 4.7065 | 0.87 | 1500 | 4.6924 |
| 4.4439 | 1.17 | 2000 | 4.5465 |
| 4.2929 | 1.46 | 2500 | 4.4328 |
| 4.1869 | 1.75 | 3000 | 4.3248 |
| 4.0802 | 2.04 | 3500 | 4.2481 |
| 3.8877 | 2.33 | 4000 | 4.2060 |
| 3.8547 | 2.62 | 4500 | 4.1542 |
| 3.83 | 2.92 | 5000 | 4.0982 |
| 3.6375 | 3.21 | 5500 | 4.0946 |
| 3.5896 | 3.5 | 6000 | 4.0648 |
| 3.5596 | 3.79 | 6500 | 4.0309 |
| 3.474 | 4.08 | 7000 | 4.0282 |
| 3.3101 | 4.37 | 7500 | 4.0247 |
| 3.3055 | 4.66 | 8000 | 4.0122 |
| 3.2891 | 4.96 | 8500 | 3.9981 |
| 3.1562 | 5.25 | 9000 | 4.0102 |
| 3.1289 | 5.54 | 9500 | 4.0093 |
| 3.1216 | 5.83 | 10000 | 4.0085 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
John6666/real-horny-pro-fp8-flux | John6666 | "2024-08-31T12:47:05Z" | 316 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"Flux",
"fp8",
"float8_e4m3fn",
"realistic",
"photorealistic",
"en",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] | text-to-image | "2024-08-31T12:44:42Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- Flux
- fp8
- float8_e4m3fn
- realistic
- photorealistic
---
Original model is [here](https://civitai.com/models/684924/real-horny-pro?modelVersionId=789800).
This model created by [GC](https://civitai.com/user/GC).
## Notice
This is an experimental conversion made in Spaces using a homebrew script. The serverless Inference API does not currently support torch's float8_e4m3fn, so this model does not work there.
I have not been able to confirm whether the conversion works properly.
Please consider this a test run only.
guilxus/9acee9e3-03fa-49a8-a20b-00f6592c59cc | guilxus | "2025-02-03T04:20:34Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-03T03:54:53Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9acee9e3-03fa-49a8-a20b-00f6592c59cc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a43f2de29c2b3b63_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a43f2de29c2b3b63_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: guilxus/9acee9e3-03fa-49a8-a20b-00f6592c59cc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a43f2de29c2b3b63_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 401ac8c2-7126-4c66-9fc4-329f6ace3fa9
wandb_project: Gradients-On-11
wandb_run: your_name
wandb_runid: 401ac8c2-7126-4c66-9fc4-329f6ace3fa9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 9acee9e3-03fa-49a8-a20b-00f6592c59cc
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3687
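Since this repository contains a LoRA adapter, a minimal loading sketch with PEFT (mirroring the config's `trust_remote_code: true`; assumes the adapter weights live in this repo):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the long-context base model, then attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Llama-2-7b-128k", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "guilxus/9acee9e3-03fa-49a8-a20b-00f6592c59cc")
```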
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.429 | 0.1137 | 200 | 1.3687 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MatthewFrank/bert-base-uncased_pytorch_1k_V01 | MatthewFrank | "2024-10-21T02:33:25Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-21T01:29:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mk314/PPO-1M-LunarLander-v2 | mk314 | "2024-01-01T22:39:29Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-01T22:39:12Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-MLP
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.83 +/- 10.52
name: mean_reward
verified: false
---
# **PPO-MLP** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-MLP** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file listing for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo; the filename is hypothetical.
checkpoint = load_from_hub(
    repo_id="mk314/PPO-1M-LunarLander-v2",
    filename="PPO-MLP-LunarLander-v2.zip",  # assumption, verify in the repo
)
model = PPO.load(checkpoint)
```
|
memevis/HH10 | memevis | "2025-01-14T03:15:15Z" | 49 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-14T03:08:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/pasa-7b-crawler-GGUF | mradermacher | "2025-02-26T16:53:31Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:CarlanLark/pasa-dataset",
"base_model:bytedance-research/pasa-7b-crawler",
"base_model:quantized:bytedance-research/pasa-7b-crawler",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-26T16:25:47Z" | ---
base_model: bytedance-research/pasa-7b-crawler
datasets:
- CarlanLark/pasa-dataset
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bytedance-research/pasa-7b-crawler
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
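As a minimal sketch (not part of the original card), the quants below can also be driven through `llama-cpp-python`'s chat API; the filename is taken from the table that follows, and the user message is illustrative:

```python
from llama_cpp import Llama

# Load one of the GGUF quants listed in the table below.
llm = Llama.from_pretrained(
    repo_id="mradermacher/pasa-7b-crawler-GGUF",
    filename="pasa-7b-crawler.Q4_K_M.gguf",  # any quant from the table works
)

# Illustrative request; pasa is a paper-search agent model.
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Find papers on retrieval-augmented generation."}]
)
print(resp["choices"][0]["message"]["content"])
```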
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/pasa-7b-crawler-GGUF/resolve/main/pasa-7b-crawler.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SongTonyLi/Llama-3.2-1B-Instruct-CPT-D_chosen-Magpie | SongTonyLi | "2024-09-29T23:58:28Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-29T23:56:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrunaAI/resnet18.a3_in1k-turbo-green-smashed | PrunaAI | "2024-11-13T13:23:53Z" | 2 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-10T08:46:33Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir resnet18.a3_in1k-turbo-green-smashed
huggingface-cli download PrunaAI/resnet18.a3_in1k-turbo-green-smashed --local-dir resnet18.a3_in1k-turbo-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "resnet18.a3_in1k-turbo-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
from pruna_engine.PrunaModel import PrunaModel
model_path = "resnet18.a3_in1k-turbo-green-smashed/model" # Specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path) # Load the model.
import torch
image = torch.rand(1, 3, 224, 224).to('cuda')  # Dummy input: a batch of one 3x224x224 image.
smashed_model(image)  # Run inference.
```
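Since the base model is an ImageNet classifier, a typical follow-up is to take the arg-max over the output; a minimal sketch, assuming the Pruna wrapper returns raw logits of shape `(batch, 1000)` like the underlying timm `resnet18.a3_in1k` (an assumption, not documented behavior):
```python
# Minimal sketch, reusing `smashed_model` and `image` from the snippet above.
# Assumes the wrapper returns raw ImageNet logits of shape (batch, 1000).
import torch

with torch.no_grad():
    logits = smashed_model(image)
predicted_class = torch.argmax(logits, dim=1)  # Index of the top ImageNet class.
print(predicted_class)
```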
## Configurations
The configuration info is in `model/smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, resnet18.a3_in1k, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
rmsdud/EnData-Alpha | rmsdud | "2024-07-12T10:27:34Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-12T08:24:19Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
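In the absence of model-specific instructions, a minimal sketch assuming the standard 🤗 Transformers text-generation pipeline applies (the repo id comes from this card's header; the prompt and generation settings are illustrative assumptions):
```python
# Minimal sketch, assuming the standard Transformers text-generation API applies.
# Prompt and generation settings are illustrative, not from the model authors.
from transformers import pipeline

generator = pipeline("text-generation", model="rmsdud/EnData-Alpha")
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```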
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF | mradermacher | "2025-03-03T19:53:28Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nkpz/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT",
"base_model:quantized:nkpz/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-03T14:04:30Z" | ---
base_model: nkpz/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nkpz/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
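If a large quant is shipped in multiple parts, the parts are simply byte-concatenated into a single `.gguf` before loading; a minimal sketch (the part file names below are hypothetical — use the names actually listed in the repo):
```python
# Minimal sketch: byte-concatenate a multi-part GGUF into one loadable file.
# Part file names are hypothetical; match them to the files the repo actually ships.
import shutil

parts = ["model.Q6_K.gguf.part1of2", "model.Q6_K.gguf.part2of2"]
with open("model.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # Stream copy; avoids loading parts into memory.
```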
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jekunz/smollm135-da02-is1-no02-sv02-ties | jekunz | "2025-04-07T08:49:07Z" | 0 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"jekunz/smollm-135m-cpt-fineweb-icelandic",
"jekunz/smollm-135m-cpt-fineweb-swedish",
"jekunz/smollm-135m-cpt-fineweb-danish",
"jekunz/smollm-135m-cpt-fineweb-norwegian-bokmaal",
"base_model:jekunz/smollm-135m-cpt-fineweb-danish",
"base_model:merge:jekunz/smollm-135m-cpt-fineweb-danish",
"base_model:jekunz/smollm-135m-cpt-fineweb-icelandic",
"base_model:merge:jekunz/smollm-135m-cpt-fineweb-icelandic",
"base_model:jekunz/smollm-135m-cpt-fineweb-norwegian-bokmaal",
"base_model:merge:jekunz/smollm-135m-cpt-fineweb-norwegian-bokmaal",
"base_model:jekunz/smollm-135m-cpt-fineweb-swedish",
"base_model:merge:jekunz/smollm-135m-cpt-fineweb-swedish",
"region:us"
] | null | "2025-04-07T08:48:56Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |