modelId (string, 5–137 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 to 2025-03-29) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (401 classes) | tags (sequence, 1–4.05k) | pipeline_tag (54 classes) | createdAt (date, 2022-03-02 to 2025-03-29) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Edgar404/donut-shivi-cheques_320_1 | Edgar404 | "2024-06-03T12:37:53Z" | 48 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-06-03T12:37:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
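Pending the author's own snippet, here is a hypothetical sketch inferred only from the repo tags (`vision-encoder-decoder`, `image-text-to-text`); the input file name is an assumption:
```python
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

processor = AutoProcessor.from_pretrained("Edgar404/donut-shivi-cheques_320_1")
model = VisionEncoderDecoderModel.from_pretrained("Edgar404/donut-shivi-cheques_320_1")
image = Image.open("cheque.png").convert("RGB")  # assumed input image
pixel_values = processor(image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_new_tokens=256)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```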
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
okg8697A/Casa | okg8697A | "2024-05-31T05:48:06Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-31T05:48:06Z" | ---
license: apache-2.0
---
|
ginic/data_seed_bs64_1_wav2vec2-large-xlsr-53-buckeye-ipa | ginic | "2025-01-06T20:54:22Z" | 10 | 0 | null | [
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] | automatic-speech-recognition | "2025-01-06T20:53:41Z" |
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---
# About
This model was created to support experiments for evaluating phonetic transcription
with the Buckeye corpus as part of https://github.com/ginic/multipa.
This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus.
For details about specific model parameters, see the config.json in this repository or the
training scripts in the scripts/buckeye_experiments folder of the GitHub repository.
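A minimal inference sketch with the 🤗 Transformers pipeline (assuming a 16 kHz mono audio file; note that the model emits IPA phonetic transcriptions rather than orthographic text):
```python
from transformers import pipeline

# Assumed file name; any 16 kHz mono recording of English speech should work.
asr = pipeline(
    "automatic-speech-recognition",
    model="ginic/data_seed_bs64_1_wav2vec2-large-xlsr-53-buckeye-ipa",
)
print(asr("speech_sample.wav")["text"])  # IPA transcription
```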
# Experiment Details
These models vary the random seed used to select training data, while keeping an even 50/50 gender split, in order to measure the statistical significance of training data selection. Each model is retrained with the same model parameters but a different data seed.
Goals:
- Establish whether data variation with the same gender makeup is statistically significant in changing performance on the test set
Params to vary:
- training data seed (--train_seed): [91, 114, 771, 503]
|
carlosdanielhernandezmena/stt_fo_quartznet15x5_sp_ep163_100h | carlosdanielhernandezmena | "2023-10-23T22:46:46Z" | 6 | 0 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"pytorch",
"NeMo",
"QuartzNet",
"QuartzNet15x5",
"faroese",
"faroe islands",
"fo",
"dataset:carlosdanielhernandezmena/ravnursson_asr",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | "2022-11-28T10:50:49Z" | ---
language:
- fo
library_name: nemo
datasets:
- carlosdanielhernandezmena/ravnursson_asr
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- pytorch
- NeMo
- QuartzNet
- QuartzNet15x5
- faroese
- faroe islands
license: cc-by-4.0
model-index:
- name: stt_fo_quartznet15x5_sp_ep163_100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Ravnursson Corpus (Test)
type: carlosdanielhernandezmena/ravnursson_asr
split: test
args:
language: fo
metrics:
- name: WER
type: wer
value: 22.81
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Ravnursson Corpus (Dev)
type: carlosdanielhernandezmena/ravnursson_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 20.51
---
# stt_fo_quartznet15x5_sp_ep163_100h
**Paper:** [ASR Language Resources for Faroese](https://aclanthology.org/2023.nodalida-1.4.pdf)
**NOTE! This model was trained with the NeMo version: nemo-toolkit==1.10.0**
The "stt_fo_quartznet15x5_sp_ep163_100h" is an acoustic model created with NeMo which is suitable for Automatic Speech Recognition in Faroese.
It is the result of fine-tuning the model ["QuartzNet15x5Base-En.nemo"](https://catalog.ngc.nvidia.com/orgs/nvidia/models/nemospeechmodels/files) with 100 hours of Faroese data developed by the [Ravnur Project](https://maltokni.fo/en/the-ravnur-project) from the Faroe Islands and curated by Carlos Mena during 2022. Most of the data is available at public repositories such as [Clarin.is](http://hdl.handle.net/20.500.12537/276) or [Hugging Face](https://huggingface.co/datasets/carlosdanielhernandezmena/ravnursson_asr).
The specific corpus used to fine-tune the model is:
- [The Ravnursson Corpus: Faroese Speech and Transcripts (100h34m)](http://hdl.handle.net/20.500.12537/276)
The fine-tuning process was performed during November 2022 on the servers of the [Language and Voice Laboratory](https://lvl.ru.is/) at [Reykjavík University](https://en.ru.is/) (Iceland) by Carlos Daniel Hernández Mena.
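A minimal transcription sketch with the NeMo toolkit (a sketch based on the stated model type; "audio.wav" stands in for any 16 kHz mono Faroese recording):
```python
import nemo.collections.asr as nemo_asr

# Load the checkpoint from the Hugging Face Hub and transcribe one file.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
    model_name="carlosdanielhernandezmena/stt_fo_quartznet15x5_sp_ep163_100h"
)
print(asr_model.transcribe(["audio.wav"]))
```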
```bibtex
@misc{mena2022quartznet15x5faroese,
title={Acoustic Model in Faroese: stt\_fo\_quartznet15x5\_sp\_ep163\_100h.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/carlosdanielhernandezmena/stt_fo_quartznet15x5_sp_ep163_100h},
year={2022}
}
```
# Acknowledgements
Special thanks to Jón Guðnason, head of the Language and Voice Lab, for providing the computational power that made this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
|
migueldeguzmandev/GPT2XL_RLLMv10-1 | migueldeguzmandev | "2025-01-28T18:31:56Z" | 72 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-11T07:26:38Z" | ---
license: mit
---
[Results: RLLMv10 Experiment](https://www.lesswrong.com/posts/x5ySDLEsJdtdmR7nX/rllmv10-experiment)
[More info? see RLLM virtual map!](https://whimsical.com/rllm-visual-map-QQvFHNr6aVDdXRUnyb5NCu) |
JiaxiJiang/textual_inversion_clock | JiaxiJiang | "2024-03-22T08:17:14Z" | 36 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-03-22T07:52:45Z" | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - JiaxiJiang/textual_inversion_clock
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
from diffusers import StableDiffusionPipeline
import torch

# Minimal sketch, not the author's snippet. Assumption: the learned placeholder
# token is "<clock>"; check the repository's learned embeddings for the real name.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("JiaxiJiang/textual_inversion_clock")
image = pipe("a photo of a <clock> on a wooden table").images[0]
image.save("clock.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lesso08/cae51ed9-aaac-46f0-9869-da6471e921e4 | lesso08 | "2025-03-16T11:19:22Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-03-14T14:40:47Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cae51ed9-aaac-46f0-9869-da6471e921e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# cae51ed9-aaac-46f0-9869-da6471e921e4
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5945
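The card provides no usage snippet; a minimal loading sketch for this LoRA adapter (assuming it applies cleanly to the base model named above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned adapter.
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B")
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base, "lesso08/cae51ed9-aaac-46f0-9869-da6471e921e4")
```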
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000208
- train_batch_size: 4
- eval_batch_size: 4
- seed: 80
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 3.1986 |
| 2.5881 | 0.8772 | 500 | 2.5846 |
| 2.586 | 1.7544 | 1000 | 2.5870 |
| 2.5845 | 2.6316 | 1500 | 2.5870 |
| 2.5783 | 3.5088 | 2000 | 2.5945 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Azurro/APT3-1B-Instruct-v1 | Azurro | "2024-09-30T12:14:43Z" | 74 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ALLaMo",
"finetuned",
"pl",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-25T19:20:37Z" | ---
license: cc-by-nc-4.0
language:
- pl
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- ALLaMo
- finetuned
inference: false
---
# APT3-1B-Instruct-v1
The APT3-1B-Instruct-v1 Large Language Model (LLM) is an instruct fine-tuned version of the [APT3-1B-Base](https://huggingface.co/Azurro/APT3-1B-Base) generative text model.
## Introduction
At [Azurro](https://azurro.pl), we consistently place importance on using Open Source technologies, both in our projects and in our everyday lives. We have decided to share a base language model trained by us. We are confident that smaller language models have great potential, and direct access to them for all people interested in such models further democratizes this significant and dynamically changing field.
## Statements
Training large language models requires a lot of computing power and it is meant for the major players on the market. However, does it mean that individuals or small companies cannot train language models capable of performing specific tasks? We decided to answer this question and train our own language model from scratch.
We have made the following statements:
* we use 1 consumer graphic card
* we train the model only with the Polish corpus
* we use manually selected, high quality texts for training the model.
Why have we made such statements?
It is worth noting that training a model requires several times more resources than using it. To put it simply, it can be assumed that it is about 3-4 times more. Therefore, if a model can be run with a graphic card that has 6 GB VRAM, then training this model requires about 24 GB VRAM (this is the minimum value).
Many consumer computers are equipped with good quality graphic cards that can be used for training a model at one’s own home. This is why we have decided to use a top consumer graphic card - Nvidia’s RTX 4090 24GB VRAM.
All the currently available language models have been trained mainly on English corpora with only a small admixture of other languages, including Polish. As a result, these models are not the best at dealing with Polish texts. Even the popular GPT models from OpenAI and Bard from Google often have issues with correct Polish grammatical forms. Therefore, we have decided to prepare a model based only on the Polish corpus. An additional advantage of using only the Polish corpus is the size of the model - it is better to focus on one language in the case of smaller models.
It is important to remember that models are only as good as the data with which they are trained. Given the small size of the model, we trained it with carefully selected texts and instructions. With close collaboration and advice from the [Speakleash](https://speakleash.org) team, our team has prepared over 285 GB of Polish language text corpus and 2.5 million instructions that have then been processed and used for training the model. Additionally, the unique feature of our model is that it has been trained on the largest amount of text among all available models for the Polish language.
## Model
APT3-1B-Instruct-v1 has been trained and fine-tuned with the use of an original open source framework called [ALLaMo](https://github.com/chrisociepa/allamo). This framework allows the user to train and fine-tune language models similar to the Meta AI’s LLaMA models quickly and efficiently.
APT3-1B-Instruct-v1 is an autoregressive language model based on the architecture of a transformer. It has been fine-tuned with 2.5 million instructions, over two epochs, on over 1 billion tokens in total.
The training dataset (instructions in Polish) was created by combining 1.2 million instructions from [Speakleash](https://speakleash.org) and 1.3 million of our private instructions.
### Model description:
* **Developed by:** [Azurro](https://azurro.pl)
* **Language:** Polish
* **Model type:** causal decoder-only
* **License:** CC BY NC 4.0 (non-commercial use)
<p align="center">
<img src="https://huggingface.co/Azurro/APT3-1B-Instruct-v1/raw/main/apt3-1b-instruct-sft.jpg">
</p>
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence token. The generated completion will be terminated by the end-of-sentence token.
E.g.
```
prompt = "<s>[INST] Jakie mamy pory roku? [/INST]"
completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.</s>"
```
### Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "Azurro/APT3-1B-Instruct-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
In order to reduce the memory usage, you can use smaller precision (`bfloat16`).
```python
import torch
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
```
And then you can use Hugging Face Pipelines to generate text:
```python
import transformers
prompt = "<s>[INST] Jakie mamy pory roku? [/INST]"
pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
sequences = pipeline(prompt, max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
Generated output:
`<s>[INST] Jakie mamy pory roku? [/INST] W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.</s>`
## Limitations and Biases
APT3-1B-Instruct-v1 model is a quick demonstration showing that the base model can be easily fine-tuned to achieve desired performance. It does not have any moderation mechanisms. It should not be used for human-facing interactions without further guardrails and user consent.
APT3-1B-Instruct-v1 can produce factually incorrect output, and should not be relied on to produce factually accurate information. APT3-1B-Base and APT3-1B-Instruct-v1 were trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that these models could generate lewd, biased or otherwise offensive outputs.
## License
Because of an unclear legal situation, we have decided to publish the model under CC BY NC 4.0 license - it allows for non-commercial use. The model can be used for scientific purposes and privately, as long as the license conditions are met.
## Disclaimer
The license of this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
## Citation
Please cite this model using the following format:
```
@online{AzurroAPT3Base1B,
author = {Ociepa, Krzysztof and {Azurro Team}},
title = {Introducing APT3-1B-Base: Polish Language Model},
year = {2024},
url = {https://azurro.pl/apt3-1b-base-en},
note = {Accessed: 2024-01-04}, % change this date
urldate = {2024-01-04} % change this date
}
```
## Special thanks
We would like to especially thank the [Speakleash](https://speakleash.org) team for collecting and sharing texts and instructions in Polish, and for the support we could always count on while preparing the training set for our models. Without you, it would not have been possible to train this model. Thank you!
## The Azurro Team
Please find more information on the Azurro [homepage](https://azurro.pl).
## Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [[email protected]](mailto:[email protected]).
|
DifeiT/text_classification_model | DifeiT | "2024-02-07T18:16:35Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-07T17:52:09Z" | ---
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: text_classification_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classification_model
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5013
- Accuracy: 0.8046
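Absent usage guidance, a hypothetical inference sketch (the label set depends on the undocumented fine-tuning data, so the example input is an assumption):
```python
from transformers import pipeline

# Returns the predicted label and score for a biomedical sentence.
classifier = pipeline("text-classification", model="DifeiT/text_classification_model")
print(classifier("The patient responded well to the prescribed treatment."))
```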
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 22 | 0.5339 | 0.7586 |
| No log | 2.0 | 44 | 0.5013 | 0.8046 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
phungkhaccuong/936335af-edaf-497a-b3a3-b161dee99bf6 | phungkhaccuong | "2025-01-15T12:53:31Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | "2025-01-15T12:45:42Z" | ---
library_name: peft
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 936335af-edaf-497a-b3a3-b161dee99bf6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 590fd4cbceee3791_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/590fd4cbceee3791_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: phungkhaccuong/936335af-edaf-497a-b3a3-b161dee99bf6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/590fd4cbceee3791_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 78bbc8a0-78c1-4557-a1dd-2fa1b271760f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 78bbc8a0-78c1-4557-a1dd-2fa1b271760f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 936335af-edaf-497a-b3a3-b161dee99bf6
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8898
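A generation sketch under assumptions: the adapter is attached with PEFT, the prompt format `'{instruction} {input}'` is taken from the axolotl config above, and the sampling settings are illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# trust_remote_code mirrors the axolotl config shown above.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "phungkhaccuong/936335af-edaf-497a-b3a3-b161dee99bf6")
prompt = "Summarize the following text: The quick brown fox jumps over the lazy dog."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```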
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 4.7484 |
| 4.155 | 0.0057 | 10 | 4.6259 |
| 3.2389 | 0.0114 | 20 | 3.8550 |
| 2.9115 | 0.0171 | 30 | 3.1712 |
| 2.5033 | 0.0228 | 40 | 2.9412 |
| 2.6088 | 0.0284 | 50 | 2.8898 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kort/xf3 | Kort | "2025-03-03T15:47:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-03T14:52:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
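Pending the author's own snippet, a hypothetical sketch inferred only from the repo tags (`llama`, `text-generation`, `conversational`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Kort/xf3")
model = AutoModelForCausalLM.from_pretrained("Kort/xf3")
messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```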
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/1711934317908x997337708213858600 | habulaj | "2024-04-01T02:06:46Z" | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:RickGrimes001/t-shirt",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2024-04-01T01:18:48Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: in the style of TOK
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- RickGrimes001/t-shirt
---
# LoRA DreamBooth - squaadinc/1711934317908x997337708213858600
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
in the style of TOK
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'squaadinc/1711934317908x997337708213858600',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic in the style of TOK jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
huggingtweets/musicalmushr00m | huggingtweets | "2021-07-01T06:48:54Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/musicalmushr00m/1625122113002/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1351886412895850499/wqwtu4Np_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mushr00m</div>
<div style="text-align: center; font-size: 14px;">@musicalmushr00m</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from mushr00m.
| Data | mushr00m |
| --- | --- |
| Tweets downloaded | 161 |
| Retweets | 50 |
| Short tweets | 33 |
| Tweets kept | 78 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26j7t29j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @musicalmushr00m's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lxo37ttz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lxo37ttz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/musicalmushr00m')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tencent-community/Hunyuan-A52B-Instruct-FP8 | tencent-community | "2024-11-05T23:35:19Z" | 47 | 1 | transformers | [
"transformers",
"safetensors",
"hunyuan",
"text-generation",
"conversational",
"custom_code",
"en",
"arxiv:2411.02265",
"autotrain_compatible",
"fp8",
"region:us"
] | text-generation | "2024-11-05T13:33:28Z" | ---
license_link: https://huggingface.co/tencent/Tencent-Hunyuan-Large/blob/main/LICENSE.txt
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
The original repo is here: https://huggingface.co/tencent/Tencent-Hunyuan-Large
This is the Hunyuan-A52B-Instruct-FP8 model uploaded into its own repository.
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
### Model Introduction
With the rapid development of artificial intelligence technology, large language models (LLMs) have made significant progress in fields such as natural language processing, computer vision, and scientific tasks. However, as the scale of these models increases, optimizing resource consumption while maintaining high performance has become a key challenge. To address this challenge, we have explored Mixture of Experts (MoE) models. The currently unveiled Hunyuan-Large (Hunyuan-MoE-A52B) is the largest open-source Transformer-based MoE model in the industry, featuring a total of 389 billion parameters and 52 billion active parameters.
By open-sourcing the Hunyuan-Large model and revealing related technical details, we hope to inspire more researchers with innovative ideas and collectively advance the progress and application of AI technology. We welcome you to join our open-source community to explore and optimize future AI models together!
### Introduction to Model Technical Advantages
#### Model
- **High-Quality Synthetic Data**: By enhancing training with synthetic data, Hunyuan-Large can learn richer representations, handle long-context inputs, and generalize better to unseen data.
- **KV Cache Compression**: Utilizes Grouped Query Attention (GQA) and Cross-Layer Attention (CLA) strategies to significantly reduce memory usage and computational overhead of KV caches, improving inference throughput.
- **Expert-Specific Learning Rate Scaling**: Sets different learning rates for different experts to ensure each sub-model effectively learns from the data and contributes to overall performance (see the sketch after this list).
- **Long-Context Processing Capability**: The pre-trained model supports text sequences up to 256K, and the Instruct model supports up to 128K, significantly enhancing the ability to handle long-context tasks.
- **Extensive Benchmarking**: Conducts extensive experiments across various languages and tasks to validate the practical effectiveness and safety of Hunyuan-Large.
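To make the expert-specific learning rate idea concrete, here is an illustrative PyTorch sketch (not Tencent's implementation; the `experts.{i}.` parameter-name convention is an assumption):
```python
import torch

def expert_param_groups(model, base_lr, expert_scales):
    """Build optimizer param groups with a per-expert learning rate scale."""
    groups, default_params = [], []
    for name, param in model.named_parameters():
        for expert_id, scale in expert_scales.items():
            if f"experts.{expert_id}." in name:  # assumed naming convention
                groups.append({"params": [param], "lr": base_lr * scale})
                break
        else:
            default_params.append(param)
    groups.append({"params": default_params, "lr": base_lr})
    return groups

# e.g. optimizer = torch.optim.AdamW(expert_param_groups(moe_model, 3e-4, {0: 1.0, 1: 0.8}))
```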
## Benchmark Evaluation
**Hunyuan-Large pre-trained model** achieves the best overall performance compared to both Dense and MoE based
competitors having similar activated parameter sizes. For aggregated benchmarks such as MMLU, MMLU-Pro, and CMMLU,
Hunyuan-Large consistently achieves the best performance, confirming its comprehensive abilities on aggregated tasks.
Hunyuan-Large also shows superior performance in commonsense understanding and reasoning, and classical NLP tasks
such as QA and reading comprehension tasks (e.g., CommonsenseQA, PIQA and TriviaQA).
For the mathematics capability, Hunyuan-Large outperforms all baselines in math datasets of GSM8K and MATH,
and also gains the best results on CMATH in Chinese. We also observe that Hunyuan-Large achieves the overall
best performance in all Chinese tasks (e.g., CMMLU, C-Eval).
| Model | LLama3.1-405B | LLama3.1-70B | Mixtral-8x22B | DeepSeek-V2 | Hunyuan-Large |
|------------------|---------------|--------------|---------------|-------------|---------------|
| MMLU | 85.2 | 79.3 | 77.8 | 78.5 | **88.4** |
| MMLU-Pro | **61.6** | 53.8 | 49.5 | - | 60.2 |
| BBH | 85.9 | 81.6 | 78.9 | 78.9 | **86.3** |
| HellaSwag | - | - | **88.7** | 87.8 | 86.8 |
| CommonsenseQA | 85.8 | 84.1 | 82.4 | - | **92.9** |
| WinoGrande | 86.7 | 85.3 | 85.0 | 84.9 | **88.7** |
| PIQA | - | - | 83.6 | 83.7 | **88.3** |
| NaturalQuestions | - | - | 39.6 | 38.7 | **52.8** |
| DROP | 84.8 | 79.6 | 80.4 | 80.1 | **88.9** |
| ARC-C | **96.1** | 92.9 | 91.2 | 92.4 | 95.0 |
| TriviaQA | - | - | 82.1 | 79.9 | **89.2** |
| CMMLU | - | - | 60.0 | 84.0 | **90.2** |
| C-Eval | - | - | 59.6 | 81.7 | **91.9** |
| C3 | - | - | 71.4 | 77.4 | **82.3** |
| GSM8K | 89.0 | 83.7 | 83.7 | 79.2 | **92.8** |
| MATH | 53.8 | 41.4 | 42.5 | 43.6 | **69.8** |
| CMATH | - | - | 72.3 | 78.7 | **91.3** |
| HumanEval | 61.0 | 58.5 | 53.1 | 48.8 | **71.4** |
| MBPP | **73.4** | 68.6 | 64.2 | 66.6 | 72.6 |
**Hunyuan-Large-Instruct** achieves consistent improvements on most types of tasks compared to LLMs having similar
activated parameters, indicating the effectiveness of our post-training. Delving into the model performance
in different categories of benchmarks, we find that our instruct model achieves the best performance on the MMLU and MATH datasets.
Notably, on the MMLU dataset, our model demonstrates a significant improvement, outperforming the LLama3.1-405B model by 2.6%.
This enhancement is not just marginal but indicative of the Hunyuan-Large-Instruct’s superior understanding and reasoning
capabilities across a wide array of language understanding tasks. The model’s prowess is further underscored in its performance
on the MATH dataset, where it surpasses the LLama3.1-405B by a notable margin of 3.6%.
Remarkably, this leap in accuracy is achieved with only 52 billion activated parameters, underscoring the efficiency of our model.
| Model | LLama3.1 405B Inst. | LLama3.1 70B Inst. | Mixtral 8x22B Inst. | DeepSeekV2.5 Chat | Hunyuan-Large Inst. |
|----------------------|---------------------|--------------------|---------------------|-------------------|---------------------|
| MMLU | 87.3 | 83.6 | 77.8 | 80.4 | **89.9** |
| CMMLU | - | - | 61.0 | - | **90.4** |
| C-Eval | - | - | 60.0 | - | **88.6** |
| BBH | - | - | 78.4 | 84.3 | **89.5** |
| HellaSwag | - | - | 86.0 | **90.3** | 88.5 |
| ARC-C | **96.9** | 94.8 | 90.0 | - | 94.6 |
| GPQA_diamond | **51.1** | 46.7 | - | - | 42.4 |
| MATH | 73.8 | 68.0 | 49.8 | 74.7 | **77.4** |
| HumanEval | 89.0 | 80.5 | 75.0 | 89.0 | **90.0** |
| AlignBench | 6.0 | 5.9 | 6.2 | 8.0 | **8.3** |
| MT-Bench | 9.1 | 8.8 | 8.1 | 9.0 | **9.4** |
| IFEval strict-prompt | **86.0** | 83.6 | 71.2 | - | 85.0 |
| Arena-Hard | 69.3 | 55.7 | - | 76.2 | **81.8** |
| AlpacaEval-2.0 | 39.3 | 34.3 | 30.9 | 50.5 | **51.8** |
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{sun2024hunyuanlargeopensourcemoemodel,
title={Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent},
author={Xingwu Sun and Yanfeng Chen and Yiqing Huang and Ruobing Xie and Jiaqi Zhu and Kai Zhang and Shuaipeng Li and Zhen Yang and Jonny Han and Xiaobo Shu and Jiahao Bu and Zhongzhi Chen and Xuemeng Huang and Fengzong Lian and Saiyong Yang and Jianfeng Yan and Yuyuan Zeng and Xiaoqin Ren and Chao Yu and Lulu Wu and Yue Mao and Tao Yang and Suncong Zheng and Kan Wu and Dian Jiao and Jinbao Xue and Xipeng Zhang and Decheng Wu and Kai Liu and Dengpeng Wu and Guanghui Xu and Shaohua Chen and Shuang Chen and Xiao Feng and Yigeng Hong and Junqiang Zheng and Chengcheng Xu and Zongwei Li and Xiong Kuang and Jianglu Hu and Yiqi Chen and Yuchi Deng and Guiyang Li and Ao Liu and Chenchen Zhang and Shihui Hu and Zilong Zhao and Zifan Wu and Yao Ding and Weichao Wang and Han Liu and Roberts Wang and Hao Fei and Peijie She and Ze Zhao and Xun Cao and Hai Wang and Fusheng Xiang and Mengyuan Huang and Zhiyuan Xiong and Bin Hu and Xuebin Hou and Lei Jiang and Jiajia Wu and Yaping Deng and Yi Shen and Qian Wang and Weijie Liu and Jie Liu and Meng Chen and Liang Dong and Weiwen Jia and Hu Chen and Feifei Liu and Rui Yuan and Huilin Xu and Zhenxiang Yan and Tengfei Cao and Zhichao Hu and Xinhua Feng and Dong Du and Tinghao She and Yangyu Tao and Feng Zhang and Jianchen Zhu and Chengzhong Xu and Xirui Li and Chong Zha and Wen Ouyang and Yinben Xia and Xiang Li and Zekun He and Rongpeng Chen and Jiawei Song and Ruibin Chen and Fan Jiang and Chongqing Zhao and Bo Wang and Hao Gong and Rong Gan and Winston Hu and Zhanhui Kang and Yong Yang and Yuhong Liu and Di Wang and Jie Jiang},
year={2024},
eprint={2411.02265},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.02265},
}
```
|
ohdwq/123 | ohdwq | "2025-02-23T09:37:40Z" | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | "2025-02-23T09:37:40Z" | ---
license: artistic-2.0
---
|
mradermacher/NeuralGemma2-2b-Spanish-GGUF | mradermacher | "2024-09-06T10:45:12Z" | 24 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"google/gemma-2-2b-it",
"Kukedlc/Gemma-2-2B-Spanish-1.0",
"es",
"dataset:Kukedlc/Big-Spanish-1.2M",
"base_model:Kukedlc/NeuralGemma2-2b-Spanish",
"base_model:quantized:Kukedlc/NeuralGemma2-2b-Spanish",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-06T10:07:45Z" | ---
base_model: Kukedlc/NeuralGemma2-2b-Spanish
datasets:
- Kukedlc/Big-Spanish-1.2M
language:
- es
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- google/gemma-2-2b-it
- Kukedlc/Gemma-2-2B-Spanish-1.0
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kukedlc/NeuralGemma2-2b-Spanish
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
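For example, with llama-cpp-python (a sketch; substitute any file name from the quant table below):
```python
from llama_cpp import Llama

# Downloads the chosen GGUF file from the Hub and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/NeuralGemma2-2b-Spanish-GGUF",
    filename="NeuralGemma2-2b-Spanish.Q4_K_M.gguf",
)
print(llm("Hola, ¿cómo estás?", max_tokens=48)["choices"][0]["text"])
```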
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.IQ3_XS.gguf) | IQ3_XS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.IQ3_S.gguf) | IQ3_S | 1.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.IQ3_M.gguf) | IQ3_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q5_K_M.gguf) | Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q6_K.gguf) | Q6_K | 2.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralGemma2-2b-Spanish-GGUF/resolve/main/NeuralGemma2-2b-Spanish.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3-Smolphin-8b-GGUF | mradermacher | "2024-05-05T15:12:25Z" | 54 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:EryriLabs/Llama-3-Smolphin-8b",
"base_model:quantized:EryriLabs/Llama-3-Smolphin-8b",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-24T09:04:36Z" | ---
base_model: EryriLabs/Llama-3-Smolphin-8b
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/EryriLabs/Llama-3-Smolphin-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
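For example, downloading one quant manually and loading it with llama-cpp-python (a sketch; the file name comes from the table below and the context size is an illustrative choice):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    "mradermacher/Llama-3-Smolphin-8b-GGUF", "Llama-3-Smolphin-8b.Q4_K_M.gguf"
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello!", max_tokens=48)["choices"][0]["text"])
```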
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
diffbot/Llama-3.1-Diffbot-Small-2412 | diffbot | "2025-01-08T01:38:12Z" | 83,392 | 5 | null | [
"pytorch",
"llama",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | "2024-12-30T18:10:56Z" | ---
license: llama3.1
base_model:
- meta-llama/Llama-3.1-8B-Instruct
--- |
SzegedAI/100M_deberta-base_seed0_KD_lf_0.15_bert-base-cased_mlm_0.0_cp_25000 | SzegedAI | "2024-07-22T18:58:32Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"deberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-07-22T17:40:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hfl/cino-large-v2 | hfl | "2022-01-24T10:40:50Z" | 13 | 11 | transformers | [
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"zh",
"bo",
"kk",
"ko",
"mn",
"ug",
"yue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language:
- zh
- bo
- kk
- ko
- mn
- ug
- yue
license: "apache-2.0"
---
## CINO: Pre-trained Language Models for Chinese Minority Languages (中国少数民族预训练模型)
Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual ability for language understanding.
We have seen rapid progress on building multilingual PLMs in recent years.
However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems.
To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on Chinese minority corpora, such as
- Chinese,中文(zh)
- Tibetan,藏语(bo)
- Mongolian (Uighur form),蒙语(mn)
- Uyghur,维吾尔语(ug)
- Kazakh (Arabic form),哈萨克语(kk)
- Korean,朝鲜语(ko)
- Zhuang,壮语
- Cantonese,粤语(yue)
Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM
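As a usage sketch (assuming the standard 🤗 Transformers fill-mask API for this XLM-R-based checkpoint; the example sentence is illustrative):

```python
# Fill-mask sketch; CINO is XLM-R based, so the mask token is <mask>.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/cino-large-v2")
for pred in fill_mask("Beijing is the <mask> of China."):
    print(pred["token_str"], pred["score"])
```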
You may also be interested in:
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
- More resources by HFL: https://github.com/ymcui/HFL-Anthology
|
Nyanmaru/TS_LoRA_sd3_general-warning-sign | Nyanmaru | "2025-03-22T18:04:35Z" | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"sd3",
"sd3-diffusers",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:openrail++",
"region:us"
] | text-to-image | "2024-11-24T13:02:59Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- sd3
- sd3-diffusers
- template:sd-lora
instance_prompt: 'a photo of TOK traffic sign '
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - Nyanmaru/TS_LoRA_sd3_general-warning-sign
<Gallery />
## Model description
These are Nyanmaru/TS_LoRA_sd3_general-warning-sign DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of TOK traffic sign ` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](/Nyanmaru/TS_LoRA_sd3_general-warning-sign/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nyanmaru/TS_LoRA_sd3_general-warning-sign', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of TOK traffic sign ').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/Nyanmaru/TS_LoRA_sd3_general-warning-sign/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE).
## Intended uses & limitations
#### How to use
```python
# A minimal sketch mirroring the diffusers snippet shown above
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nyanmaru/TS_LoRA_sd3_general-warning-sign', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of TOK traffic sign ').images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
kevgeniygifjpgxyz/Qwen2.5-Coder-7B-Instruct-bnb-16bit-v2 | kevgeniygifjpgxyz | "2024-12-20T19:18:06Z" | 137 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-12-20T18:57:30Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nikonawt/Hermes-3-Llama-3.2-3B-Q6_K-GGUF | nikonawt | "2025-02-16T08:05:13Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:NousResearch/Hermes-3-Llama-3.2-3B",
"base_model:quantized:NousResearch/Hermes-3-Llama-3.2-3B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-16T08:04:59Z" | ---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- llama-cpp
- gguf-my-repo
base_model: NousResearch/Hermes-3-Llama-3.2-3B
widget:
- example_title: Hermes 3
  messages:
  - role: system
    content: You are a sentient, superintelligent artificial general intelligence,
      here to teach and assist me.
  - role: user
    content: Write a short story about Goku discovering kirby has teamed up with Majin
      Buu to destroy the world.
library_name: transformers
model-index:
- name: Hermes-3-Llama-3.2-3B
  results: []
---
# nikonawt/Hermes-3-Llama-3.2-3B-Q6_K-GGUF
This model was converted to GGUF format from [`NousResearch/Hermes-3-Llama-3.2-3B`](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nikonawt/Hermes-3-Llama-3.2-3B-Q6_K-GGUF --hf-file hermes-3-llama-3.2-3b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nikonawt/Hermes-3-Llama-3.2-3B-Q6_K-GGUF --hf-file hermes-3-llama-3.2-3b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nikonawt/Hermes-3-Llama-3.2-3B-Q6_K-GGUF --hf-file hermes-3-llama-3.2-3b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nikonawt/Hermes-3-Llama-3.2-3B-Q6_K-GGUF --hf-file hermes-3-llama-3.2-3b-q6_k.gguf -c 2048
```
|
Hanhpt23/whisper-small-silvarmed | Hanhpt23 | "2025-02-07T20:18:27Z" | 5 | 0 | null | [
"safetensors",
"whisper",
"generated_from_trainer",
"en",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | "2025-02-06T18:32:19Z" | ---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-small
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Hanhpt23/SilvarMed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1782
- Wer: 6.0842
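A minimal transcription sketch, assuming the standard 🤗 Transformers ASR pipeline (the audio path is a placeholder):

```python
# Transcription sketch with the Transformers ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Hanhpt23/whisper-small-silvarmed")
result = asr("sample.wav")  # placeholder: path to a 16 kHz mono audio file
print(result["text"])
```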
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0747 | 1.0 | 2438 | 0.2071 | 8.9008 |
| 0.0385 | 2.0 | 4876 | 0.2154 | 9.5935 |
| 0.0434 | 3.0 | 7314 | 0.1894 | 4.9993 |
| 0.0175 | 4.0 | 9752 | 0.2119 | 5.9665 |
| 0.0226 | 5.0 | 12190 | 0.1965 | 4.6334 |
| 0.0089 | 6.0 | 14628 | 0.2144 | 5.1954 |
| 0.0195 | 7.0 | 17066 | 0.2112 | 4.9536 |
| 0.0053 | 8.0 | 19504 | 0.1983 | 5.7313 |
| 0.0053 | 9.0 | 21942 | 0.2062 | 6.8226 |
| 0.0034 | 10.0 | 24380 | 0.1960 | 6.1822 |
| 0.0107 | 11.0 | 26818 | 0.2028 | 5.8947 |
| 0.0001 | 12.0 | 29256 | 0.2012 | 5.9012 |
| 0.0015 | 13.0 | 31694 | 0.1876 | 4.6138 |
| 0.0 | 14.0 | 34132 | 0.1878 | 6.8096 |
| 0.0032 | 15.0 | 36570 | 0.1962 | 6.0515 |
| 0.0 | 16.0 | 39008 | 0.1846 | 5.9796 |
| 0.0001 | 17.0 | 41446 | 0.1835 | 4.5680 |
| 0.002 | 18.0 | 43884 | 0.1813 | 5.9273 |
| 0.0 | 19.0 | 46322 | 0.1778 | 6.1495 |
| 0.0 | 20.0 | 48760 | 0.1782 | 6.0842 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 3.2.0
- Tokenizers 0.19.1
|
mradermacher/AuroraGPT-IT-v4-0125-GGUF | mradermacher | "2025-02-06T09:58:04Z" | 238 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:open-phi/textbooks",
"dataset:open-phi/programming_books_llama",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:nvidia/ChatQA-Training-Data",
"dataset:jeffmeloy/sonnet3.5_science_conversations",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:teknium/OpenHermes-2.5",
"dataset:openbmb/UltraInteract_sft",
"base_model:argonne-private/AuroraGPT-IT-v4-0125",
"base_model:quantized:argonne-private/AuroraGPT-IT-v4-0125",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-30T07:40:57Z" | ---
base_model: argonne-private/AuroraGPT-IT-v4-0125
datasets:
- open-phi/textbooks
- open-phi/programming_books_llama
- openchat/openchat_sharegpt4_dataset
- nvidia/ChatQA-Training-Data
- jeffmeloy/sonnet3.5_science_conversations
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
- teknium/OpenHermes-2.5
- openbmb/UltraInteract_sft
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/argonne-private/AuroraGPT-IT-v4-0125
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
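As a sketch (an assumption, not part of the original card), a single quant from the table below can be fetched programmatically with `huggingface_hub`:

```python
# Sketch: download one quant from this repo (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/AuroraGPT-IT-v4-0125-GGUF",
    filename="AuroraGPT-IT-v4-0125.Q4_K_M.gguf",  # any filename from the table below
)
print(path)  # local cache path, ready to pass to a GGUF runtime
```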
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q2_K.gguf) | Q2_K | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q3_K_S.gguf) | Q3_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q3_K_M.gguf) | Q3_K_M | 3.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q3_K_L.gguf) | Q3_K_L | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.IQ4_XS.gguf) | IQ4_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q4_K_S.gguf) | Q4_K_S | 3.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q4_K_M.gguf) | Q4_K_M | 3.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q5_K_S.gguf) | Q5_K_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q5_K_M.gguf) | Q5_K_M | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q6_K.gguf) | Q6_K | 5.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.Q8_0.gguf) | Q8_0 | 6.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AuroraGPT-IT-v4-0125-GGUF/resolve/main/AuroraGPT-IT-v4-0125.f16.gguf) | f16 | 12.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
SmithCC/Hello_Lora | SmithCC | "2025-03-11T16:23:43Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-11T15:59:24Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SmithCC
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
beezu/Anubis-70B-v1-MLX-8Bit | beezu | "2025-01-27T14:40:55Z" | 8 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:quantized:TheDrummer/Anubis-70B-v1",
"license:other",
"8-bit",
"region:us"
] | null | "2025-01-27T14:13:39Z" | ---
license: other
base_model: TheDrummer/Anubis-70B-v1
tags:
- mlx
---
# beezu/Anubis-70B-v1-MLX-8Bit
The model [beezu/Anubis-70B-v1-Q8-mlx](https://huggingface.co/beezu/Anubis-70B-v1-Q8-mlx) was converted to MLX format from [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1) using mlx-lm version **0.20.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("beezu/Anubis-70B-v1-Q8-mlx")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
broodmother41/e7674d64-b0c7-4569-a846-6ce4d20dca52 | broodmother41 | "2025-02-06T04:20:31Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | "2025-02-06T04:11:40Z" | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e7674d64-b0c7-4569-a846-6ce4d20dca52
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 8b23ad7fe3a30374_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/8b23ad7fe3a30374_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: broodmother41/e7674d64-b0c7-4569-a846-6ce4d20dca52
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1256
micro_batch_size: 4
mlflow_experiment_name: /tmp/8b23ad7fe3a30374_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9aea0b45-a81a-4aae-bd4f-f237a30e257f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9aea0b45-a81a-4aae-bd4f-f237a30e257f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e7674d64-b0c7-4569-a846-6ce4d20dca52
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6668
## Model description
More information needed
## Intended uses & limitations
More information needed
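A minimal loading sketch, assuming the adapter is applied on top of the base model with PEFT (the prompt is illustrative):

```python
# Sketch: apply this LoRA adapter to the facebook/opt-125m base with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "broodmother41/e7674d64-b0c7-4569-a846-6ce4d20dca52")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Hello, world!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```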
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 1256
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.514 | 0.2626 | 1256 | 0.6668 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
andabi/M5 | andabi | "2025-01-21T16:13:58Z" | 11 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | "2025-01-21T16:13:41Z" | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
pbalaram/distilbert-base-uncased-finetuned-emotion | pbalaram | "2024-07-17T17:17:26Z" | 119 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-16T16:18:22Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9375
    - name: F1
      type: f1
      value: 0.9376802293652775
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1444
- Accuracy: 0.9375
- F1: 0.9377
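A minimal inference sketch, assuming the standard 🤗 Transformers text-classification pipeline (the input sentence is illustrative):

```python
# Emotion classification sketch with the Transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pbalaram/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results!"))  # [{'label': ..., 'score': ...}]
```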
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.1660 | 0.9305 | 0.9311 |
| No log | 2.0 | 500 | 0.1444 | 0.9375 | 0.9377 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tiiuae/Falcon3-3B-Instruct | tiiuae | "2025-01-10T06:58:36Z" | 31,816 | 24 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"falcon3",
"conversational",
"en",
"fr",
"es",
"pt",
"base_model:tiiuae/Falcon3-3B-Instruct",
"base_model:finetune:tiiuae/Falcon3-3B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-14T06:03:53Z" | ---
base_model: tiiuae/Falcon3-3B-Instruct
language:
- en
- fr
- es
- pt
library_name: transformers
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
tags:
- falcon3
---
<div align="center">
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
</div>
# Falcon3-3B-Instruct
The **Falcon3** family of open foundation models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
**Falcon3-3B-Instruct** achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks.
Falcon3-3B-Instruct supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K.
## Model Details
- Architecture
- Transformer-based causal decoder-only architecture
- 22 decoder blocks
- Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
- Wider head dimension: 256
- High RoPE value to support long context understanding: 1000042
- Uses SwiGLU and RMSNorm
- 32K context length
- 131K vocab size
- Pruned and healed from Falcon3-7B-Base on only 100 gigatokens of datasets comprising web, code, STEM, high-quality and multilingual data, using 1024 H100 GPU chips
- Post-trained on 1.2 million samples of STEM, conversational, code, safety and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
## Getting started
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "tiiuae/Falcon3-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
</details>
<br>
## Benchmarks
We report our internal pipeline benchmarks in the following table.
- We use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
- We report **raw scores** obtained by applying the chat template and fewshot_as_multiturn.
- We use the same batch size across all models.
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
<colgroup>
<col style="width: 10%;">
<col style="width: 10%;">
<col style="width: 7%;">
<col style="width: 7%;">
<col style="width: 7%;">
<col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
</colgroup>
<thead>
<tr>
<th>Category</th>
<th>Benchmark</th>
<th>Llama-3.2-3B-Instruct</th>
<th>Qwen2.5-3B-Instruct</th>
<th>Nemotron-Mini-4B-Instruct</th>
<th>Falcon3-3B-Instruct</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">General</td>
<td>MMLU (5-shot)</td>
<td>61.2</td>
<td><b>65.4</b></td>
<td>57.3</td>
<td>56.9</td>
</tr>
<tr>
<td>MMLU-PRO (5-shot)</td>
<td>27.7</td>
<td><b>32.6</b></td>
<td>26.0</td>
<td>29.7</td>
</tr>
<tr>
<td>IFEval</td>
<td><b>74.7</b></td>
<td>64.1</td>
<td>66.3</td>
<td>68.3</td>
</tr>
<tr>
<td rowspan="3">Math</td>
<td>GSM8K (5-shot)</td>
<td><b>76.8</b></td>
<td>56.7</td>
<td>29.8</td>
<td>74.8</td>
</tr>
<tr>
<td>GSM8K (8-shot, COT)</td>
<td><b>78.8</b></td>
<td>60.8</td>
<td>35.0</td>
<td>78.0</td>
</tr>
<tr>
<td>MATH Lvl-5 (4-shot)</td>
<td>14.6</td>
<td>0.0</td>
<td>0.0</td>
<td><b>19.9</b></td>
</tr>
<tr>
<td rowspan="5">Reasoning</td>
<td>Arc Challenge (25-shot)</td>
<td>50.9</td>
<td>55.0</td>
<td><b>56.2</b></td>
<td>55.5</td>
</tr>
<tr>
<td>GPQA (0-shot)</td>
<td><b>32.2</b></td>
<td>29.2</td>
<td>27.0</td>
<td>29.6</td>
</tr>
<tr>
<td>GPQA (0-shot, COT)</td>
<td>11.3</td>
<td>11.0</td>
<td>12.2</td>
<td><b>26.5</b></td>
</tr>
<tr>
<td>MUSR (0-shot)</td>
<td>35.0</td>
<td><b>40.2</b></td>
<td>38.7</td>
<td>39.0</td>
</tr>
<tr>
<td>BBH (3-shot)</td>
<td>41.8</td>
<td>44.5</td>
<td>39.5</td>
<td><b>45.4</b></td>
</tr>
<tr>
<td rowspan="4">CommonSense Understanding</td>
<td>PIQA (0-shot)</td>
<td>74.6</td>
<td>73.8</td>
<td>74.6</td>
<td><b>75.6</b></td>
</tr>
<tr>
<td>SciQ (0-shot)</td>
<td>77.2</td>
<td>60.7</td>
<td>71.0</td>
<td><b>95.5</b></td>
</tr>
<tr>
<td>Winogrande (0-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td><b>65.0</b></td>
</tr>
<tr>
<td>OpenbookQA (0-shot)</td>
<td>40.8</td>
<td>41.2</td>
<td><b>43.2</b></td>
<td>42.2</td>
</tr>
<tr>
<td rowspan="2">Instructions following</td>
<td>MT-Bench (avg)</td>
<td>7.1</td>
<td><b>8.0</b></td>
<td>6.7</td>
<td>7.2</td>
</tr>
<tr>
<td>Alpaca (WC)</td>
<td><b>19.4</b></td>
<td>19.4</td>
<td>9.6</td>
<td>15.5</td>
</tr>
<tr>
<td>Tool use</td>
<td>BFCL AST (avg)</td>
<td><b>85.2</b></td>
<td>84.8</td>
<td>59.8</td>
<td>59.3</td>
</tr>
<tr>
<td rowspan="2">Code</td>
<td>EvalPlus (0-shot) (avg)</td>
<td>55.2</td>
<td><b>69.4</b></td>
<td>40.0</td>
<td>52.9</td>
</tr>
<tr>
<td>Multipl-E (0-shot) (avg)</td>
<td>31.6</td>
<td>29.2</td>
<td>19.6</td>
<td><b>32.9</b></td>
</tr>
</tbody>
</table>
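As a sketch of reproducing one of the scores above with lm-evaluation-harness (the `apply_chat_template` and `fewshot_as_multiturn` argument names are assumed from recent lm-eval releases):

```python
# Sketch: one benchmark run with lm-evaluation-harness (pip install lm-eval).
# Chat-templating argument names are assumed from recent lm-eval versions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-3B-Instruct",
    tasks=["gsm8k"],
    num_fewshot=5,
    apply_chat_template=True,
    fewshot_as_multiturn=True,
)
print(results["results"])
```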
## Useful links
- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or to interact with our researchers and developers.
## Technical Report
Coming soon.
## Citation
If the Falcon3 family of models was helpful to your work, feel free to cite us.
```
@misc{Falcon3,
title = {The Falcon 3 Family of Open Models},
url = {https://huggingface.co/blog/falcon3},
author = {Falcon-LLM Team},
month = {December},
year = {2024}
}
```
|
adammandic87/a5437567-e5b7-4675-9e06-aff4959be6eb | adammandic87 | "2025-01-30T12:22:47Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | "2025-01-30T12:20:30Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a5437567-e5b7-4675-9e06-aff4959be6eb
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 11d109720337ba22_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/11d109720337ba22_train_data.json
  type:
    field_instruction: query
    field_output: product_title
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/a5437567-e5b7-4675-9e06-aff4959be6eb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/11d109720337ba22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0f27c471-eef6-4e4c-b22c-34a4324ebb4c
wandb_project: birthday-sn56-19-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0f27c471-eef6-4e4c-b22c-34a4324ebb4c
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a5437567-e5b7-4675-9e06-aff4959be6eb
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 5.0142 |
| 4.6878 | 0.0034 | 13 | 4.1597 |
| 4.033 | 0.0067 | 26 | 3.9702 |
| 3.861 | 0.0101 | 39 | 3.8967 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mollysama/QRWKV6-32B-Instruct-Preview-GGUF | mollysama | "2024-12-30T04:11:26Z" | 308 | 6 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-28T11:41:57Z" | ---
license: apache-2.0
---
|
KappaNeuro/alex-andreev-style | KappaNeuro | "2023-09-14T02:31:10Z" | 5 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"art",
"style",
"alex andreev",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-14T02:31:06Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- art
- style
- alex andreev
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Alex Andreev style page
widget:
- text: Alex Andreev style - there are sandcastles on the beach that have been built before. A girl with a balloon watches her parents chatting.
- text: Alex Andreev style - extremely bizarre, the tapered light at the close of day, ethereal ophidian, elaborate layered blown-glass mercurial armor, whorls of annihilating energy, inside a twisting transcendental dimension beyond eternity, anime, style of stalenhag
- text: Alex Andreev style - Phone wallpaper post apocalyptical landscape by Simon Stlenhag, tall alien creature in the horizon
- text: Alex Andreev style - In the style of a dark New Yorker cartoon, create an image of a person standing on a bridge over a river, contemplating jumping off, with a flock of crows circling above.
- text: Alex Andreev style - Step into the surreal and atmospheric world of Alex Andreev as you reimagine the vibrant cityscape of Bangkok. Infuse the skyline with dreamlike elements, blurring the boundaries between reality and imagination. Utilize a soft and ethereal color palette, with gentle pastel tones and subtle gradients. Create a sense of mystery and enchantment by incorporating floating structures, whimsical characters, and surreal architectural elements. Let the composition emanate a serene and otherworldly ambiance that reflects Andreev's unique artistic style.
- text: Alex Andreev style - Alex Andreev's mesmerizing artwork showcasing an electronic brain, imbued with his signature surrealistic style, ethereal tones and dreamlike atmosphere, seamlessly merging the realms of technology and imagination
- text: Alex Andreev style - a surrealist image with a fisherman with very long legs walking in the ocean with his legs submerged and flying killer whales playing in the sky with skyscrapers in the background and an enormous planet in the sky
- text: Alex Andreev style - side-profile view many cultists sitting in toy airplanes chasing a bird with a star-shaped beak, moody atmosphere, clouds, 4k render, in the style of Genndy Tartakovsky
- text: Alex Andreev style - Vast minimalist landscape, a pianist playing a grand piano in the snow in the style of Paul delaroche
- text: Alex Andreev style - a minimal poster of a mysterious man in a suit holding a gun infront of a satellite dish in snow
---
# Alex Andreev style

> Alex Andreev style - there are sandcastles on the beach that have been built before. A girl with a balloon watches her parents chatting.
<p>Alex Andreev is a contemporary Russian artist known for his surreal and dystopian digital artworks. His creations explore themes of isolation, technology, and the human condition, often depicting futuristic and dreamlike environments.</p><p>Andreev's artwork combines digital painting, 3D modeling, and photo manipulation techniques to create unique and otherworldly visuals. His compositions are characterized by imaginative landscapes, intricate architectural structures, and enigmatic characters, all bathed in atmospheric lighting and a muted color palette.</p><p>One of his notable series is "Inner World," where he delves into the concept of introspection and explores the complexity of human emotions and thoughts. Through his artwork, Andreev invites viewers to contemplate the depths of the subconscious and the mysteries of the mind.</p><p>His style often incorporates elements of cyberpunk and science fiction, with a touch of dark surrealism. The juxtaposition of organic and mechanical elements in his artworks highlights the ever-growing influence of technology on our lives and the potential consequences it may bring.</p><p>Andreev's artwork has gained international recognition, with exhibitions and features in art galleries and publications worldwide. His unique vision and ability to create thought-provoking and visually stunning digital art have made him a notable figure in the realm of contemporary surrealism.</p><p>Alex Andreev's digital artworks transport viewers to imaginary worlds filled with intrigue, symbolism, and a sense of foreboding. His skillful use of digital tools and his exploration of the human psyche make his artwork both visually captivating and intellectually engaging, inviting viewers to question their own reality and reflect on the complexities of the human experience.</p>
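As a usage sketch (an assumption, not part of the original card: the repo and weight-file layout are taken to follow the standard diffusers LoRA convention; pass `weight_name=` explicitly if auto-detection fails):

```python
# Sketch: load this SDXL LoRA with diffusers and prompt with the trigger phrase.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("KappaNeuro/alex-andreev-style")
image = pipe("Alex Andreev style - a pianist playing a grand piano in the snow").images[0]
```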
## Image examples for the model:

> Alex Andreev style - extremely bizarre, the tapered light at the close of day, ethereal ophidian, elaborate layered blown-glass mercurial armor, whorls of annihilating energy, inside a twisting transcendental dimension beyond eternity, anime, style of stalenhag

> Alex Andreev style - Phone wallpaper post apocalyptical landscape by Simon Stlenhag, tall alien creature in the horizon

> Alex Andreev style - In the style of a dark New Yorker cartoon, create an image of a person standing on a bridge over a river, contemplating jumping off, with a flock of crows circling above.

> Alex Andreev style - Step into the surreal and atmospheric world of Alex Andreev as you reimagine the vibrant cityscape of Bangkok. Infuse the skyline with dreamlike elements, blurring the boundaries between reality and imagination. Utilize a soft and ethereal color palette, with gentle pastel tones and subtle gradients. Create a sense of mystery and enchantment by incorporating floating structures, whimsical characters, and surreal architectural elements. Let the composition emanate a serene and otherworldly ambiance that reflects Andreev's unique artistic style.

> Alex Andreev style - Alex Andreev's mesmerizing artwork showcasing an electronic brain, imbued with his signature surrealistic style, ethereal tones and dreamlike atmosphere, seamlessly merging the realms of technology and imagination

> Alex Andreev style - a surrealist image with a fisherman with very long legs walking in the ocean with his legs submerged and flying killer whales playing in the sky with skyscrapers in the background and an enormous planet in the sky

> Alex Andreev style - side-profile view many cultists sitting in toy airplanes chasing a bird with a star-shaped beak, moody atmosphere, clouds, 4k render, in the style of Genndy Tartakovsky

> Alex Andreev style - Vast minimalist landscape, a pianist playing a grand piano in the snow in the style of Paul delaroche

> Alex Andreev style - a minimal poster of a mysterious man in a suit holding a gun infront of a satellite dish in snow
|
amitjohn007/electra-finetuned-squad | amitjohn007 | "2022-11-11T16:54:22Z" | 59 | 0 | transformers | [
"transformers",
"tf",
"electra",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-11-11T14:50:14Z" | ---
tags:
- generated_from_keras_callback
model-index:
- name: amitjohn007/electra-finetuned-squad
  results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amitjohn007/electra-finetuned-squad
This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2298
- Epoch: 2
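A minimal inference sketch, assuming the standard 🤗 Transformers question-answering pipeline (this is a TensorFlow checkpoint, hence `framework="tf"`; the question and context are illustrative):

```python
# Extractive QA sketch with the Transformers pipeline (TensorFlow weights).
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="amitjohn007/electra-finetuned-squad",
    framework="tf",
)
print(qa(
    question="What was fine-tuned?",
    context="An ELECTRA large discriminator was fine-tuned on SQuAD-style data.",
))
```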
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16599, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.5733 | 0 |
| 0.3829 | 1 |
| 0.2298 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
NX2411/wav2vec2-large-xlsr-korean-demo3 | NX2411 | "2022-08-14T15:58:45Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:zeroth_korean",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-08-14T11:56:17Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean
model-index:
- name: wav2vec2-large-xlsr-korean-demo3
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-korean-demo3
This model is a fine-tuned version of [NX2411/wav2vec2-large-xlsr-korean-demo-no-LM](https://huggingface.co/NX2411/wav2vec2-large-xlsr-korean-demo-no-LM) on the zeroth_korean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8265
- Wer: 0.5090
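A minimal inference sketch, assuming the usual Wav2Vec2 CTC workflow (the input array is a silent placeholder; replace it with real 16 kHz speech):

```python
# Korean ASR sketch: Wav2Vec2 with greedy CTC decoding.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "NX2411/wav2vec2-large-xlsr-korean-demo3"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

speech = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```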
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.6157 | 2.6 | 400 | 0.6686 | 0.6386 |
| 0.4643 | 5.19 | 800 | 0.7036 | 0.6086 |
| 0.3038 | 7.79 | 1200 | 0.6960 | 0.5817 |
| 0.2229 | 10.39 | 1600 | 0.7358 | 0.5571 |
| 0.178 | 12.99 | 2000 | 0.8221 | 0.5636 |
| 0.153 | 15.58 | 2400 | 0.8575 | 0.5691 |
| 0.129 | 18.18 | 2800 | 0.7809 | 0.5297 |
| 0.1141 | 20.78 | 3200 | 0.8077 | 0.5441 |
| 0.0994 | 23.38 | 3600 | 0.8087 | 0.5209 |
| 0.0917 | 25.97 | 4000 | 0.8176 | 0.5149 |
| 0.0823 | 28.57 | 4400 | 0.8265 | 0.5090 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
thdangtr/blip_recipe1m_title_v5 | thdangtr | "2024-05-08T17:24:36Z" | 64 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-05-08T11:11:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
facebook/mgenre-wiki | facebook | "2023-01-24T17:11:18Z" | 560 | 27 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"mbart",
"text2text-generation",
"retrieval",
"entity-retrieval",
"named-entity-disambiguation",
"entity-disambiguation",
"named-entity-linking",
"entity-linking",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bm",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"ff",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gn",
"gu",
"ha",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kg",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lg",
"ln",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"qu",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"ss",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"ti",
"tl",
"tn",
"tr",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yo",
"zh",
"arxiv:2103.12528",
"arxiv:2001.08210",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-06-08T09:25:11Z" | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bm
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gn
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kg
- kk
- km
- kn
- ko
- ku
- ky
- la
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- qu
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- te
- th
- ti
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yo
- zh
tags:
- retrieval
- entity-retrieval
- named-entity-disambiguation
- entity-disambiguation
- named-entity-linking
- entity-linking
- text2text-generation
---
# mGENRE
The mGENRE (multilingual Generative ENtity REtrieval) system as presented in [Multilingual Autoregressive Entity Linking](https://arxiv.org/abs/2103.12528), implemented in PyTorch.
In a nutshell, mGENRE uses a sequence-to-sequence approach to entity retrieval (e.g., linking), based on a fine-tuned [mBART](https://arxiv.org/abs/2001.08210) architecture. mGENRE performs retrieval by generating the unique entity name conditioned on the input text, using constrained beam search to generate only valid identifiers. The model was first released in the [facebookresearch/GENRE](https://github.com/facebookresearch/GENRE) repository using `fairseq` (the `transformers` models are obtained with a conversion script similar to [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)).
This model was trained on 105 languages from Wikipedia.
## BibTeX entry and citation info
**Please consider citing our works if you use code from this repository.**
```bibtex
@article{decao2020multilingual,
author = {De Cao, Nicola and Wu, Ledell and Popat, Kashyap and Artetxe, Mikel
and Goyal, Naman and Plekhanov, Mikhail and Zettlemoyer, Luke
and Cancedda, Nicola and Riedel, Sebastian and Petroni, Fabio},
title = "{Multilingual Autoregressive Entity Linking}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {10},
pages = {274-290},
year = {2022},
month = {03},
issn = {2307-387X},
doi = {10.1162/tacl_a_00460},
url = {https://doi.org/10.1162/tacl\_a\_00460},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00460/2004070/tacl\_a\_00460.pdf},
}
```
## Usage
Here is an example of generation for Wikipedia page disambiguation:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# OPTIONAL: load the prefix tree (trie). You need to additionally download
# https://huggingface.co/facebook/mgenre-wiki/blob/main/trie.py and
# https://huggingface.co/facebook/mgenre-wiki/blob/main/titles_lang_all105_trie_with_redirect.pkl
# This is a fast but memory-inefficient prefix tree (trie), implemented with nested Python `dict`s.
# NOTE: loading this map may take up to 10 minutes and occupy a lot of RAM!
# import pickle
# from trie import Trie
# with open("titles_lang_all105_trie_with_redirect.pkl", "rb") as f:
#     trie = Trie.load_from_dict(pickle.load(f))
# Alternatively, use the memory-efficient but slightly slower trie built with `marisa_trie`, from
# https://huggingface.co/facebook/mgenre-wiki/blob/main/titles_lang_all105_marisa_trie_with_redirect.pkl
# from trie import MarisaTrie
# with open("titles_lang_all105_marisa_trie_with_redirect.pkl", "rb") as f:
#     trie = pickle.load(f)
tokenizer = AutoTokenizer.from_pretrained("facebook/mgenre-wiki")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mgenre-wiki").eval()
sentences = ["[START] Einstein [END] era un fisico tedesco."]
# Italian for "[START] Einstein [END] was a German physicist."
outputs = model.generate(
**tokenizer(sentences, return_tensors="pt"),
num_beams=5,
num_return_sequences=5,
# OPTIONAL: use constrained beam search
# prefix_allowed_tokens_fn=lambda batch_id, sent: trie.get(sent.tolist()),
)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
which outputs the following top-5 predictions (using constrained beam search)
```
['Albert Einstein >> it',
'Albert Einstein (disambiguation) >> en',
'Alfred Einstein >> it',
'Alberto Einstein >> it',
'Einstein >> it']
``` |
hfdsajkfd/my-new-shiny-tokenizer | hfdsajkfd | "2024-05-19T18:40:03Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-18T19:59:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daniel40/2d8262d0-0c0a-445f-94e8-492837710d7b | daniel40 | "2025-03-01T23:59:58Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:NousResearch/Genstruct-7B",
"base_model:adapter:NousResearch/Genstruct-7B",
"region:us"
] | null | "2025-03-01T23:59:42Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Genstruct-7B
model-index:
- name: daniel40/2d8262d0-0c0a-445f-94e8-492837710d7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel40/2d8262d0-0c0a-445f-94e8-492837710d7b
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
BorelTHU/vqgan-16x16 | BorelTHU | "2024-12-02T13:59:26Z" | 15 | 1 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"dataset:ILSVRC/imagenet-1k",
"license:mit",
"region:us"
] | null | "2024-12-02T09:28:52Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
datasets:
- ILSVRC/imagenet-1k
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Docs: Please see [github](https://github.com/zbr17/OptVQ) for more details. |
jhsmith/finetuning_bm25_small | jhsmith | "2023-12-04T23:33:40Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-12-03T22:29:51Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# jhsmith/finetuning_bm25_small
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jhsmith/finetuning_bm25_small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jhsmith/finetuning_bm25_small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5.0}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 0.0001
},
"scheduler": "warmuplinear",
"steps_per_epoch": null,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
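Putting these pieces together, a minimal training sketch (the base checkpoint and the triplet texts are assumptions; the card does not name them):
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Assumed base: an MPNet checkpoint matching the 768-dim architecture below.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
train_examples = [
    InputExample(texts=["anchor text", "positive text", "negative text"]),  # placeholder triplet
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=10)
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5.0,
)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    warmup_steps=1,
    scheduler="warmuplinear",
    optimizer_params={"lr": 1e-4},
    weight_decay=0.01,
)
```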
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
peulsilva/qwen-0.5b-instruct-summary-pt-rank8 | peulsilva | "2025-03-05T23:10:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-05T23:09:35Z" | ---
base_model: unsloth/qwen2-0.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** peulsilva
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-0.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JamesHongJey/flan-t5-small-chat | JamesHongJey | "2024-05-22T05:23:16Z" | 113 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-22T05:23:03Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/flan-t5-small
metrics:
- rouge
model-index:
- name: flan-t5-small-chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-chat
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4063
- Rouge1: 12.3084
- Rouge2: 4.6455
- Rougel: 12.1876
- Rougelsum: 12.1825
- Gen Len: 16.2885
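A quick inference sketch (the prompt is a placeholder; the card does not document the expected chat format):
```python
from transformers import pipeline

chat = pipeline("text2text-generation", model="JamesHongJey/flan-t5-small-chat")
print(chat("Hello, how are you today?", max_new_tokens=32))
```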
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6252 | 1.0 | 2000 | 2.4694 | 12.2248 | 4.6103 | 12.1019 | 12.0965 | 15.146 |
| 2.5362 | 2.0 | 4000 | 2.4252 | 12.2682 | 4.7742 | 12.136 | 12.1353 | 15.9622 |
| 2.466 | 3.0 | 6000 | 2.4110 | 12.0624 | 4.4491 | 11.9607 | 11.954 | 16.2845 |
| 2.459 | 4.0 | 8000 | 2.4063 | 12.3084 | 4.6455 | 12.1876 | 12.1825 | 16.2885 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AathifMohammed/t5base | AathifMohammed | "2024-03-03T12:00:45Z" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:adapter:google-t5/t5-base",
"license:apache-2.0",
"region:us"
] | null | "2024-02-22T06:15:31Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- rouge
base_model: google-t5/t5-base
model-index:
- name: t5base-ILC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5base-ILC
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the [ILC dataset](https://huggingface.co/datasets/d0r1h/ILC).
It achieves the following results on the evaluation set:
- Loss: 3.1984
- Rouge1: 8.381
- Rouge2: 3.916
- Rougel: 7.0243
- Rougelsum: 7.8617
- Gen Len: 18.9833
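This repository stores a PEFT adapter rather than full model weights, so loading it means attaching the adapter to the base model; a minimal sketch:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")
model = PeftModel.from_pretrained(base, "AathifMohammed/t5base")
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")
```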
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 10.5936 | 0.49 | 500 | 4.4985 | 7.204 | 2.8587 | 5.9813 | 6.774 | 18.9665 |
| 3.9459 | 0.97 | 1000 | 3.1984 | 8.381 | 3.916 | 7.0243 | 7.8617 | 18.9833 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
rfanucchi/Taxi_reinforcementelearning_course | rfanucchi | "2023-10-13T21:28:13Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-13T21:17:37Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_reinforcementelearning_course
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="rfanucchi/Taxi_reinforcementelearning_course", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
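The snippet above relies on a `load_from_hub` helper defined in the course notebook; a self-contained sketch with a greedy evaluation episode (the pickle layout, a dict with `qtable` and `env_id` keys, follows the course convention and is an assumption here):
```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled Q-table bundle from the Hub and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub("rfanucchi/Taxi_reinforcementelearning_course", "q-learning.pkl")
env = gym.make(model["env_id"])

state = env.reset()  # classic gym API; gymnasium's reset returns (obs, info)
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, done, _ = env.step(action)        # gymnasium's step returns a 5-tuple
    total_reward += reward
print("episode return:", total_reward)
```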
|
SatCat/rl_course_vizdoom_health_gathering_supreme | SatCat | "2023-02-24T07:31:08Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-24T07:31:02Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.42 +/- 6.12
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r SatCat/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
TaoOfAGI/gemma-Code-Instruct-Finetune-test-haojing | TaoOfAGI | "2024-04-07T16:07:16Z" | 113 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-07T16:04:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AKulk/wav2vec2-base-timit-epochs10 | AKulk | "2022-02-14T12:49:09Z" | 22 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-epochs10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs10
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs5](https://huggingface.co/AKulk/wav2vec2-base-timit-epochs5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/ccwaterboy | huggingtweets | "2021-05-21T22:00:18Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1041707865583566850/b2U1-eTk_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Robbie Wakefield 🤖 AI Bot </div>
<div style="font-size: 15px">@ccwaterboy bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ccwaterboy's tweets](https://twitter.com/ccwaterboy).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1680 |
| Retweets | 143 |
| Short tweets | 98 |
| Tweets kept | 1439 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/dz0al5jb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ccwaterboy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lhihgx6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lhihgx6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ccwaterboy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
phungkhaccuong/ea91a64c-2d16-3a62-8237-e430e68096ea | phungkhaccuong | "2025-01-09T19:51:55Z" | 14 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"region:us"
] | null | "2025-01-09T19:35:02Z" | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ea91a64c-2d16-3a62-8237-e430e68096ea
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7c8a1b92994c4840_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7c8a1b92994c4840_train_data.json
type:
field_input: chosen
field_instruction: prompt
field_output: rejected
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: phungkhaccuong/ea91a64c-2d16-3a62-8237-e430e68096ea
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/7c8a1b92994c4840_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a63cb250-300f-4e29-89e1-8629b73a4704
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a63cb250-300f-4e29-89e1-8629b73a4704
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ea91a64c-2d16-3a62-8237-e430e68096ea
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 2.9087 |
| 2.6251 | 0.0086 | 10 | 2.6416 |
| 2.4174 | 0.0172 | 20 | 2.3582 |
| 2.2388 | 0.0258 | 30 | 2.2458 |
| 2.1465 | 0.0344 | 40 | 2.2001 |
| 2.0986 | 0.0430 | 50 | 2.1927 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mabdelm2/whisper-small-bangla-english | mabdelm2 | "2023-12-06T23:13:47Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-06T11:50:26Z" | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-bangla-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-bangla-english
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9861
- Wer: 421.4861
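A minimal transcription sketch (the silent waveform is a placeholder; substitute real 16 kHz audio):
```python
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("mabdelm2/whisper-small-bangla-english")
model = WhisperForConditionalGeneration.from_pretrained("mabdelm2/whisper-small-bangla-english")

# Placeholder audio: one second of silence at 16 kHz.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```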
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0134 | 27.78 | 1000 | 4.2948 | 340.9132 |
| 0.0002 | 55.56 | 2000 | 4.8454 | 429.7225 |
| 0.0001 | 83.33 | 3000 | 4.9496 | 410.4745 |
| 0.0001 | 111.11 | 4000 | 4.9861 | 421.4861 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
remmymilkyway/deeprl-course-unit1 | remmymilkyway | "2024-01-16T03:51:50Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-16T03:51:28Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 233.31 +/- 63.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file listing for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is an assumption; verify it against the repo's files
checkpoint = load_from_hub(repo_id="remmymilkyway/deeprl-course-unit1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
netcat420/MFANN3bv0.10.10 | netcat420 | "2024-05-29T00:35:15Z" | 139 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"mergekit",
"merge",
"en",
"arxiv:2306.01708",
"base_model:liminerity/Phigments12",
"base_model:merge:liminerity/Phigments12",
"base_model:netcat420/MFANN3bv0.10",
"base_model:merge:netcat420/MFANN3bv0.10",
"base_model:netcat420/MFANN3bv0.6",
"base_model:merge:netcat420/MFANN3bv0.6",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-28T22:59:34Z" | ---
base_model:
- netcat420/MFANN3bv0.6
- liminerity/Phigments12
- netcat420/MFANN3bv0.10
library_name: transformers
tags:
- mergekit
- merge
license: mit
language:
- en
pipeline_tag: text-generation
---
# MFANN3bv0.10.10
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANN3bv0.6](https://huggingface.co/netcat420/MFANN3bv0.6)
* [netcat420/MFANN3bv0.10](https://huggingface.co/netcat420/MFANN3bv0.10)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: netcat420/MFANN3bv0.6
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANN3bv0.10
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
merge_method: ties
base_model: liminerity/Phigments12
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
vasugoel/K-12BERT | vasugoel | "2022-07-14T07:54:54Z" | 42 | 9 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"education",
"K-12",
"en",
"dataset:vasugoel/K-12Corpus",
"arxiv:2205.12335",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-07-05T11:37:01Z" | ---
language: en
tags:
- education
- K-12
license: apache-2.0
datasets:
- vasugoel/K-12Corpus
---
## K-12BERT model
K-12BERT is a model trained by performing continued pretraining on the K-12Corpus. Since BERT-like models have shown great progress on domain-adaptive tasks, we noticed the lack of such a model for the education domain (especially K-12 education). To that end, we present K-12BERT, a BERT-based model trained on our custom curated dataset, extracted from both open and proprietary education resources.
The model was trained using an MLM objective in a continued-pretraining fashion, due to the lack of resources available to train the model from the ground up. This also allowed us to save a lot of computational resources and utilize the existing knowledge of BERT. To that extent, we also preserve the original vocabulary of BERT, to evaluate its performance under those conditions.
## Intended uses
We hope that the community, especially researchers and professionals engaged in the education domain, is able to utilize this model to advance the domain of AI in education. With manifold uses for online education platforms, we hope we can contribute towards advancing education resources for the upcoming generation.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel, AutoTokenizer, AutoModelForMaskedLM
tokenizer = BertTokenizer.from_pretrained('vasugoel/K-12BERT') # AutoTokenizer.from_pretrained('vasugoel/K-12BERT')
model = BertModel.from_pretrained("vasugoel/K-12BERT") # AutoModelForMaskedLM.from_pretrained('vasugoel/K-12BERT')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
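Since the model keeps BERT's vocabulary and was trained with an MLM objective, a quick fill-mask smoke test (the sentence is a placeholder):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="vasugoel/K-12BERT")
print(fill_mask("Photosynthesis converts sunlight into [MASK] energy."))
```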
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2205.12335,
doi = {10.48550/ARXIV.2205.12335},
url = {https://arxiv.org/abs/2205.12335},
author = {Goel, Vasu and Sahnan, Dhruv and V, Venktesh and Sharma, Gaurav and Dwivedi, Deep and Mohania, Mukesh},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {K-12BERT: BERT for K-12 education},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
HoKa/amir-jafari | HoKa | "2025-01-28T14:46:04Z" | 8 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-12-17T17:32:13Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Amir Jafari
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Amir Jafari
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Amir Jafari` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
cuongdev/hntanh | cuongdev | "2024-10-16T13:23:25Z" | 29 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-10-16T13:18:04Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### hntAnh Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
zeerakwyne/dreambooth_lora_model_jupyter | zeerakwyne | "2023-10-14T00:24:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-10-12T20:46:42Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - zeerakwyne/dreambooth_lora_model_jupyter
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
mutableonez/spectris_machina2 | mutableonez | "2023-11-16T02:52:49Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-16T01:49:41Z" | ---
license: creativeml-openrail-m
---
|
minaiosu/lyh12169198 | minaiosu | "2025-02-01T16:29:22Z" | 436 | 0 | diffusers | [
"diffusers",
"safetensors",
"region:us"
] | null | "2024-11-26T19:36:33Z" | # For PC local use and backup purposes. |
Banano/banchan-anything-v3-0 | Banano | "2023-05-16T09:38:29Z" | 42 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-12-19T21:13:49Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
language:
- en
library_name: diffusers
---
# Banano Chan - Anything v3.0 (banchan-anything-v3.0)
A potassium-rich latent diffusion model. [Anything V3.0](https://huggingface.co/Linaqruf/anything-v3.0) trained to the likeness of [Banano Chan](https://twitter.com/Banano_Chan/), the digital waifu embodiment of [Banano](https://www.banano.cc), a feeless and super-fast meme cryptocurrency.
This model is intended to produce high-quality, highly detailed anime-style images with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags for generating images.
e.g. `banchan, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden`
A [detailed usage guide](https://huggingface.co/Banano/banchan-anything-v3-0/blob/main/doc/README.md) is also available if you are new to Stable Diffusion, image generation and prompting.
Share your pictures in the [#banano-ai-art Discord channel](https://discord.com/channels/415935345075421194/991823100054355998) or [Community](https://huggingface.co/pbuyle/banchan-anything-v3-0/discussions) tab.
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures:











--
Dreambooth model trained with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
TFOCUS/mkw_5 | TFOCUS | "2025-03-22T08:25:17Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-03-22T07:07:58Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TaylorAI/bert-d128-l6 | TaylorAI | "2024-05-17T18:26:26Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-05-17T18:24:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saadhasan02/Llama-3.2-1B-finetuned | saadhasan02 | "2025-02-24T21:17:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-02-24T21:16:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thomasht86/mms-tts-nob-scratch | thomasht86 | "2024-03-05T14:18:41Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-05T14:17:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OckerGui/videomae-base-finetuned-ESBD | OckerGui | "2023-10-15T04:02:02Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2023-10-15T03:18:28Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ESBD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ESBD
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6116
- Accuracy: 0.3095
## Model description
More information needed
## Intended uses & limitations
More information needed
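For quick testing, a minimal inference sketch (the file path is a placeholder; the `video-classification` pipeline needs a video decoding backend such as `decord` or `av` installed):
```python
from transformers import pipeline

# Classify a local video clip with the fine-tuned checkpoint.
classifier = pipeline("video-classification", model="OckerGui/videomae-base-finetuned-ESBD")
predictions = classifier("example_clip.mp4")  # placeholder path
print(predictions)
```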
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4599 | 0.09 | 27 | 1.3408 | 0.3333 |
| 1.217 | 1.09 | 54 | 1.3656 | 0.3571 |
| 1.2652 | 2.09 | 81 | 1.2593 | 0.3095 |
| 0.797 | 3.09 | 108 | 0.9102 | 0.5952 |
| 1.2926 | 4.09 | 135 | 0.9243 | 0.6429 |
| 0.4508 | 5.09 | 162 | 0.9276 | 0.6905 |
| 0.3649 | 6.09 | 189 | 0.6216 | 0.7857 |
| 0.1679 | 7.09 | 216 | 1.1307 | 0.6667 |
| 0.1277 | 8.09 | 243 | 0.9728 | 0.6667 |
| 0.0665 | 9.09 | 270 | 0.8415 | 0.7619 |
| 0.0148 | 10.09 | 297 | 0.7911 | 0.7857 |
| 0.0136 | 11.01 | 300 | 0.7950 | 0.7857 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
DANIELALISOV/meruyert31 | DANIELALISOV | "2025-01-18T13:49:21Z" | 27 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-12T08:31:46Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: meruert
---
# Meruyert31
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `meruert` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline and apply this LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('DANIELALISOV/meruyert31', weight_name='lora.safetensors')

# Include the trigger word `meruert` in your prompt (see "Trigger words" above).
image = pipeline('meruert, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
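As a small follow-up sketch (method available in recent diffusers releases), the adapter can also be fused into the base weights to avoid per-step LoRA overhead:
```py
# Optional: fuse the LoRA into the base weights for slightly faster sampling.
pipeline.fuse_lora()
image = pipeline('meruert, your prompt').images[0]
```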
|
4kew/bert-finetuned-squad | 4kew | "2024-01-22T07:13:48Z" | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-01-11T03:37:34Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
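Since the checkpoint is fine-tuned for extractive question answering, a minimal usage sketch (question and context are illustrative placeholders) is:
```python
from transformers import pipeline

# Extractive QA with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="4kew/bert-finetuned-squad")
result = qa(question="Where is the Eiffel Tower located?",
            context="The Eiffel Tower is located in Paris, France.")
print(result["answer"], result["score"])
```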
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
VERSIL91/990faa12-ebf7-4eaa-99bb-946dc452941c | VERSIL91 | "2025-01-17T04:37:20Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-17T04:37:14Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a10fc255-750d-404b-9185-82e886347ed7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ba5c8e4ea4504b50_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ba5c8e4ea4504b50_train_data.json
type:
field_instruction: dataset
field_output: context
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: null
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ba5c8e4ea4504b50_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92f51c1a-1ad0-4dbe-ae86-a5efb54d1815
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92f51c1a-1ad0-4dbe-ae86-a5efb54d1815
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a10fc255-750d-404b-9185-82e886347ed7
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
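As a rough usage sketch (assuming the adapter loads onto the listed base model with PEFT; the prompt is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and apply this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-1.7B-Instruct")
model = PeftModel.from_pretrained(base, "VERSIL91/990faa12-ebf7-4eaa-99bb-946dc452941c")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-1.7B-Instruct")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")  # placeholder prompt
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```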
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0047 | 1 | nan |
| 0.0 | 0.0140 | 3 | nan |
| 0.0 | 0.0281 | 6 | nan |
| 0.0 | 0.0421 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xw17/SmolLM-1.7B-Instruct_finetuned_s03_i | xw17 | "2024-12-02T16:54:18Z" | 139 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-02T16:52:19Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/starcoder2-15b-instruct-GGUF | mradermacher | "2024-11-15T11:18:09Z" | 82 | 0 | transformers | [
"transformers",
"gguf",
"code",
"starcoder2",
"en",
"base_model:TechxGenus/starcoder2-15b-instruct",
"base_model:quantized:TechxGenus/starcoder2-15b-instruct",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
] | null | "2024-11-15T10:43:49Z" | ---
base_model: TechxGenus/starcoder2-15b-instruct
language:
- en
library_name: transformers
license: bigcode-openrail-m
quantized_by: mradermacher
tags:
- code
- starcoder2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TechxGenus/starcoder2-15b-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/starcoder2-15b-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
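As one hedged example beyond the linked READMEs (assuming the `llama-cpp-python` bindings are installed), a single-file quant from the table below can be loaded directly:
```python
from llama_cpp import Llama

# Load one of the quants from the table below (the path is a placeholder).
llm = Llama(model_path="starcoder2-15b-instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=256)
print(out["choices"][0]["text"])
```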
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q2_K.gguf) | Q2_K | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q3_K_S.gguf) | Q3_K_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q3_K_M.gguf) | Q3_K_M | 8.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.IQ4_XS.gguf) | IQ4_XS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q3_K_L.gguf) | Q3_K_L | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 9.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q4_K_S.gguf) | Q4_K_S | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q4_K_M.gguf) | Q4_K_M | 10.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q5_K_S.gguf) | Q5_K_S | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q5_K_M.gguf) | Q5_K_M | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q6_K.gguf) | Q6_K | 13.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q8_0.gguf) | Q8_0 | 17.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
GbrlOl/finetune-embedding-all-MiniLM-L6-v2-geotechnical-test-v3 | GbrlOl | "2025-01-25T21:28:56Z" | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5005",
"loss:CoSENTLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-01-25T21:28:45Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5005
- loss:CoSENTLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: el depósito presenta estudio sísmico deterministico?
sentences:
- "Fidel Oteiza 1971, of.202 - Providencia – Santiago-Chile. Fono/Fax: (56-2)\
\ 433 3200 - e-mail: [email protected] \ni\nMINERA LAS CENIZAS S.A. \nDEPÓSITO\
\ DE RELAVES DE PASTA CABILDO \nPERMISO SECTORIAL PARA APROBACIÓN DEL PROYECTO\
\ \nTÉCNICO POR PARTE DEL SERNAGEOMIN SEGÚN EL D.S. 248 \nANÁLISIS DE ESTABILIDAD\
\ \n \n1136-ID-GE-IT-01-Rev.0 \nÍNDICE DE CONTENIDOS \n1. INTRODUCCIÓN .............................................................................................\
\ 1 \n2. REVISIÓN DE ANTECEDENTES .................................................................\
\ 2 \n2.1. Geometría del Depósito .................................................................................\
\ 2 \n2.2. Geología .........................................................................................................\
\ 3 \n2.3. Geotecnia del Suelo Natural ..........................................................................\
\ 3 \n2.4. Geotecnia del Material de Estéril ...................................................................\
\ 4 \n2.5. Análisis de estabilidad ...................................................................................\
\ 5 \n2.5.1. Criterios generales ..............................................................................\
\ 5 \n2.5.2. Análisis pseudo-estático ......................................................................\
\ 5 \n2.5.3. Casos de análisis .................................................................................\
\ 6 \n2.5.4. Resultados del análisis de estabilidad .................................................\
\ 6 \n3. ENSAYOS DE LABORATORIO RELAVE UG-2 ......................................\
\ 10 \n3.1. Granulometría e Hidrometría .......................................................................\
\ 10 \n3.2. Clasificación y Límites de Atterberg ...........................................................\
\ 12 \n3.3. Límite de Contracción ..................................................................................\
\ 12 \n3.4. Consolidación por Peso Propio ....................................................................\
\ 13 \n3.5. Proctor Modificado ......................................................................................\
\ 15 \n3.6."
- "Plan de Cierre - Faena Minera Salares Norte | 82 \n \nSe necesita de especial\
\ cuidado al depositar rocas estériles con menores propiedades g eotécnicas que\
\ las \nconsideradas como representativas en este informe, como materiales con\
\ alteración Steam Heated o materiales del \ncuaternario. Estos materiales no\
\ deben ser depositados en los pies de los taludes del depósito, ni en el fondo\
\ de la \ncuenca donde será emplazado el botadero Norte. En vez de eso, se deben\
\ depositar e n pilas horizontales cerca de \nla parte posterior del botadero\
\ Norte, donde hay presencia de material rocoso de mejor calidad. \nEn el mismo\
\ sentido, los resultados de la modelación num érica en condiciones dinámicas\
\ indican que, para corto y \nlargo plazo, los desplazamientos dentro del depósito,\
\ en general, serían menores a 0,8 metros (80 centímetros). Los \nmayores desplazamientos\
\ observados ocurrirán en los bancos de botadero Norte. La aplicación de un evento\
\ sísmico \ncausará desplazamientos en la superficie que solo afectan los bancos\
\ y bermas del depósito. Las partículas se \ndeslizarán y quedarán en las bermas\
\ que tienen 20 metros de ancho. \n Características Geoquímicas \nCon el objeto\
\ de determinar la potencialidad de generación de drenajes desde el botadero Norte\
\ se ha desarrollado \nun estudio de caracterización geoquímica de los materiales\
\ estériles y la disposición en el botadero Norte \nconsiderando su máxima capacidad.\
\ \nDe las muestras analizadas (170 en total para material estéril) 146 muestras\
\ serían representativas de los materiales \na depositar en el botadero Norte\
\ en su máxima capacidad, lo que significa que se ha podido representar un 94,8%\
\ \nde los materiales en la fase final del botadero según los criterios de Litología,\
\ Alteración y Mineralización (LAM), ley \nde Au y zona redox."
- "Sin perjuicio de ello, en este \nplan de cierre temporal se ha hecho un análisis\
\ a nive l de juicio experto respecto de los riesgos \nque se indican en la siguiente\
\ tabla. \nTabla 3-3: Riesgos evaluados Instalaciones Complementarias y Auxiliares.\
\ \nInstalación Riesgos evaluados \nInstalaciones \nComplementarias \ny Auxiliares\
\ \nIA.1) Caída de Personas o animales a desnivel \nIA.2) Caída de objetos o materiales\
\ sobre personas o animales \nIA.3) Afectación a la salud de las personas por\
\ estructuras, \nmateriales y/o suelos contaminados \nFuente: Elaborado por MYMA,\
\ 2019 \n3.1 Evaluación de Riesgos \na) Evaluación de Riesgos previo a la definición\
\ de las medidas de cierre \nUna vez establecida la probabilidad de ocurrencia\
\ de los eventos y la severidad de las \nconsecuencias para las personas y el\
\ medio ambiente, se debe catalogar el límite de aceptabilidad \ndel riesgo."
- source_sentence: ¿Cuál es el correo electrónico de contacto de la empresa VST ubicada
en Santiago, Chile?
sentences:
- "26 \n \n \n85/11382/13328 Proyecto de Cierre Tranque de Relave N°4 Planta\
\ Cabildo, Región de Valparaíso \nPlan de Cierre \n7.2.9 Habilitación de Evacuador\
\ de Emergencia. \nDescrito ampliamente en el ítem 7.2.4.1. \n7.2.10 Cercado\
\ de las Torres Colectoras. \nPara la operación del Tranque de Relave N°4, se\
\ consideraron 6 cámaras colectoras de agua clara, unidas \npor tuberías HDPE.\
\ Se prevé sellar completamente las cámaras, a través de rellenos realizados con\
\ grava, \ncon arena y con relave. Posteriormente a dicho sello se demuele la\
\ porción que sobresale de las lamas \nevitándose los promontorios. \nPara que\
\ los rellenos queden estables y se elimine toda posibilidad de que haya migración\
\ de lamas o de \nlos rellenos, a través de la tubería, el sello de la cámara\
\ se realizará con los siguientes materiales y \nsecuencia constructiva: \n\x7F\
\ Se coloca una primera capa, de a lo menos un metro de altura, sobre el fondo\
\ de la cámara, con \nsobretamaño, superior a 6\". \n\x7F Inmediatamente después\
\ se realiza un relleno de grava arenosa con contenido de grava superior al \n\
50% y arena superior al 30%. Dicha capa debe presentar a lo menos una altura de\
\ un metro. \n\x7F Posteriormente, sobre la grava, se realiza un relleno con arena\
\ de relaves (proveniente del muro), \ntambién con una dimensión mínima de un\
\ metro. \n\x7F Finalmente se realiza un relleno con lamas (secas o con baja humedad)\
\ hasta el nivel de lamas \nexistentes en la cubeta. \n\x7F Todos los rellenos\
\ se colocan sin compactar."
- "En dicho proceso \nes que se libera energía que se traduce en movimientos sísmicos\
\ en superficie. \nEl proyecto, al estar ubicado entre los 25°58’ L.S. y 26°’24\
\ L.S., se relaciona con el segmento sismo tectónico de \nCopiapó, el cual forma\
\ parte de la sección norte del llamado flat-slab. El flat-slab corresponde al\
\ segmento de la zona \nde acoplamiento entre placas tectónicas de Nazca y Sudamericana\
\ que presenta menor ángulo de subducci ón, en \nconsecuencia, no hay un volcanismo\
\ activo relevante. \nEl segmento sismo tectónico de Copiapó se caracteriza por\
\ presentar sismos de grandes rupturas únic as y también, \nen ocasiones, liberación\
\ de energía mediante grupos de sismos medios a grandes (Bar rientos, 2007). En\
\ este tramo, \nlos sismos de magnitud mayor a 6 se concentran costa afuera y\
\ paralelos a ella, con mecanismos de esfuerzo inverso \nde bajo ángul o. Sismos\
\ con mecanismos de falla tensional se observan al interior del continente, en\
\ un número \nbastante menor a los producidos mar afuera (Barrientos, 2007). Ambos\
\ tipos de sismos se producen en la región \nacoplada de subducción entre las\
\ placas de Nazca y Sudamericana. La mayoría de los sismos destructivos registrados\
\ \nen Chile concentran sus puntos focales cercanos al borde costero. A continuación,\
\ en la Tabla 4-7 se presentan \nregistros de eventos de magnitud mayor a 7 Mw\
\ entre los 24° y 28° L.S. y en la Figura 4-5 se muestran espacialmente \nlos\
\ registros mencionados anteriormente."
- "Fidel Oteiza 1971, of.202 - Providencia – Santiago-Chile. Fono/Fax: (56-2)\
\ 433 3200 - e-mail: [email protected] \nANEXO B: ESTUDIO DE PELIGR O SÍSMICO, ESPECTROS\
\ DE \nRESPUESTA Y GENERACIÓN DE REGISTROS ARTIFICIALES \nPARA EL DEPÓSITO DE\
\ PASTA, PLANTA CABILDO"
- source_sentence: ¿Cuál es la resistencia cíclica para un número de ciclos de 30
y una razón de confinamiento de 0,5 kg/cm2?
sentences:
- "Plan de Cierre - Faena Minera Salares Norte | 111 \n \n \nFuente: SRK \nFigura\
\ 8-31: Distancia de Exclusión Entre el Pie del ROM STOCK y el Borde de la Plataforma\
\ 4.473 \n Características \nEl depósito de relaves consiste en un acopio de\
\ relaves previamente filtrados, los cuales serán depositados sobre la \nplataforma\
\ intermedia del botadero Sur (plataforma 4.432 m.s.n.m.), autosoportante que\
\ se construirá en capas de \nentre 30 a 40 c m compactadas mediante rodillo vibratorio,\
\ con un contenido de humedad menor al 20%. La base \ndel depósito de relaves\
\ filtrados, como son las laderas de los cerros y las superficies inclinadas\
\ del botadero sobre \nlas que se apoyará el relave serán impermeabi lizadas mediante\
\ una geomembrana que cubrirá aproximadamente \n533.672 m 2. La tasa de depositación\
\ promedio diaria de relaves es del o rden de 6 ktpd. Las características del\
\ \ndepósito de relaves se muestran a continuación. \nTabla 8-20: Características\
\ del Depósito de Relaves \nCaracterística Valor Aproximado Unidad \nVolumen 14,8\
\ Mm3 \nCapacidad Máxima 24,1 Mt \nCapacidad Proyectada 22,2 Mt \nSuperficie Máxima\
\ 54 ha \nSuperficie Proyectada 51,7 ha \nCota Máxima 4.472 m.s.n.m. \nCota Máxima\
\ Proyectada 4.469,2 m.s.n.m. \nCota Mínima 4.432 m.s.n.m."
- "Fidel Oteiza 1971, of.202 - Providencia – Santiago-Chile. Fono/Fax: (56-2)\
\ 433 3200 - e-mail: [email protected] \n20\n \n \nFigura 11. Resistencia Cíclica\
\ vs N° de Ciclos de Carga Relave UG-2 \nEn términos generales, las curvas mostradas\
\ en la Figura 11 presentan un \ncomportamiento típico, en donde para un nivel\
\ de confinamiento dado, a \nmedida que se aumenta la resistencia cíclica, disminuye\
\ el número de ciclos \nnecesarios para alcanzar la movilidad cíclica. Además,\
\ a medida que aumenta la \npresión efectiva de confinamiento, para una misma\
\ resistencia cíclica, \ndisminuye el número de ciclos necesarios para alcanzar\
\ la movilidad cíclica \nEn este caso particular, se puede observa r que para\
\ un número de ciclos de 30, \nla resistencia cíclica es aproximadamente 0,34\
\ para una razón de confinamiento \nde 0,5 kg/cm2, mientras que para un c onfinamiento\
\ de 1,0 kg/cm 2 la resistencia \ncíclica es 0,23."
- "PLAN DE CIERRE TEMPORAL – FAENA MINERA EL TOQUI \n Sociedad Contractual Minera\
\ El Toqui \nCapítulo 4 - Caracterización del Entorno \n \n \nREVISIÓN [0] \n\
4-46 \n \nPilgerodendron, Macrachaeniun, Combera, entre otros. Hay también varios\
\ elementos de origen \ntropical, como lo son Chusquea y Myrceugenella. \nDesde\
\ el punto de vista fitogeográfico pueden distinguirse cinco distritos: Maulino,\
\ Valdiviano, \nMagallánico, del Pehuén y del Bosque Caducifolio. \nLa Faena El\
\ Toqui se ubica en el distrito valdiviano, en donde se desarrollan numerosas\
\ \nasociaciones boscosas que se distribuyen de acuerdo con la altitud, la orientación\
\ y el declive. En \ncasi todas ellas figura Nothof agus dombeyi (coigüe), asociado\
\ a veces con Eucryphia cordifolia \n(ulmo), Gevuina avellana (avellano), Persea\
\ lingue (lingue), Aextoxicum punctatum (olivillo), \nWeinmannia trichosperma\
\ (tineo), Laureliopsis philippiana (tepa) y Dasyphyllum diacanthoides \n(palo\
\ santo). \nOtras veces Nothofagus dombeyi se asocia con Nothofagus obliqua (roble)\
\ y Nothofagus procera \n(raulí), acompañados de diversas especies arbóreas comunes\
\ a la comunidad anterior; o bien se \nasocia con Fitzroya cupressoides (alerce),\
\ gigantesca coníf era que puede alcanzar 50 metros de \naltura y 3 metros de\
\ diámetro, Podocarpus nubigena (mañiu), Pilgerodendron uvifera (ciprés de las\
\ \nguaytecas) y Saxegothaea conspicua (mañiu hembra)."
- source_sentence: Se menciona en el documento que la planta presente cierres temporales?
sentences:
- "Fidel Oteiza 1971, of.202 - Providencia – Santiago-Chile. Fono/Fax: (56-2)\
\ 433 3200 - e-mail: [email protected] \n19\nLa movilidad cíclica es un fenómeno\
\ que se origina en suelos solicitados con \ncarga cíclica y rápida como en un\
\ sismo, de tal manera que se desarrolla un \ncomportamiento no drenado y se genera\
\ un gran incremento de las presiones de \nporos. Como consecuencia, se produce\
\ una disminución de rigidez, lo que se \ntraduce en grandes deformaciones. \n\
La experiencia muestra que tanto los suel os sueltos (contractivos) como densos\
\ \n(dilatantes) pueden desarro llar movilidad cíclica, pe ro las deformaciones\
\ en \ncada ciclo son considerablemente mayores en el caso de suelos sueltos y\
\ además \nson crecientes a medida que aumentan los ciclos (Ishihara, 1985) \n\
Para verificar la eventual licuefacción por movilidad cíclica, que pudiese darse\
\ \nen caso de un evento sísmico, su evalua ción se realizará mediante el método\
\ \nsimplificado de Seed (Seed & Idriss, 1971). \nLa Figura 11 muestra las curvas\
\ obtenidas de resistencia cíclica Rc lab v/s el \nnúmero de ciclos necesarios\
\ para alcanzar la movilidad cíclica, para las distintas \npresiones de confinamientos\
\ ensayadas. El criterio de falla utilizado para definir \nla cantidad de ciclos\
\ en que se tiene movilidad cíclica, fue el 100 % de exceso \nde presión de\
\ poros. \nEn la Figura 11 se muestra además gráficamente la resistencia cíclica,\
\ que \ncorresponde para cada presión de confin amiento ensayada y para un número\
\ de \nciclos de carga de 30, que representa un sismo severo magnitud 8,0 (Seed\
\ & \nIdriss, 1971)."
- "Junto al mineral de interés han aparecido \nperiódicamente “bolones” o rodad\
\ os de diferentes tamaños, situación importante de \nconocer para el tratamiento\
\ final de los taludes. \nLa mediana altura del cerro donde se ubica el Yacimiento,\
\ la depositación oportuna de \nlos lodos debido a la corta distancia de la Planta\
\ de Proceso y el ar ranque de mineral \nordenado haciendo bodega en la zona inferior\
\ del Rajo , han contribuido a que \nmorfológicamente el terreno no haya sufrido\
\ grandes alteraciones debido a la \nintervención del suelo. Una comprobación\
\ del buen trabajo realizado es la nula \ndesestabilización de laderas, taludes\
\ y Rampas después del Terremoto del 27/F/2010. \n \n6.2.- Diseño del talud final\
\ del Rajo \n \nSerá importante tener en cuenta que antes de disminuir el ángulo\
\ de talud final, es \nimperiosamente necesario, complementar el Relleno en la\
\ zona inferior del Rajo con la \nmáxima cantidad de m3 de lodos para asentar\
\ su base, con esto se cumple lo señalado \nen la aprobación ambiental y además\
\ se facilitan los trabajos tendientes a suavizar la \npendiente. El ángulo del\
\ PIT final de talud en toda la cara externa del rajo no debe \nsobrepasar los\
\ 45°. \nConsiderando la existencia de los Peñascos y Bolones que repentinamente\
\ aparecen \ncuando se excavan los Bancos en el sentido del avance, desde el Norte\
\ hacia el Sur y \nsiguiendo las instrucciones ante riores, se obtendrá una estabilidad\
\ adecuada que \nimpedirá posibles Remociones en Masa después del Abandono. También\
\ se debe tener"
- "PLAN DE CIERRE TEMPORAL – FAENA MINERA EL TOQUI \n Sociedad Contractual Minera\
\ El Toqui \nCapítulo 7 – Análisis de las Instalaciones \n \n \nREVISIÓN [0]\
\ \n7-68 \n \npara el cierre a causa de un sismo”, debido a que el método Room\
\ & Pil lar no considera dejar \ngrandes cavidades que pudiesen generar subsidencias\
\ y producir algún colapso en superficie y \nademás el propio método incluye dejar\
\ pilares remanentes que soportan las cámaras luego de ser \nexplotadas. Mientras\
\ que el resto de los riesgos fueron evaluados y luego se propuso sus \nrespectivas\
\ medidas de cierre para su control."
- source_sentence: ¿Cuál es la población total de la comuna de Catemu según el Censo
2017?
sentences:
- "Plan de Cierre PAG Planta Catemu \nCompañía Explotadora de Minas (CEMIN) \n \n\
\ \n Rev. 0 | 20-04-18 25 | 158 \n3.5 Medio humano \n \nLa localidad más próxima\
\ a a la planta corresponde a la localidad del mismo nombre. Catemu , comuna \n\
perteneciente a la provincia de San Felipe de Aconcagua, de la Región de Valparaíso\
\ . La planta se encuentra \nubicada a 85 km al norte de Santiago y 95 km del\
\ puerto de Valparaíso. \n \nSegún información del Instituto Nacional de Estadísticas\
\ (I NE) para el Ce nso 2017, la población total de la \ncomuna Catemu es de 13.998\
\ habitantes, correspondiendo a los totales de 6.982 hombres y 7.016 mujeres.\
\ \nEl número total de viviendas es 5.171 y densidad de población de 38,8 (Hab/km\
\ 2). \n \nLa actividad económica que predomina en la comuna de Catemu corresponde\
\ al sector de A gricultura, caza, \nganadería y silvicultura (correspondiendo\
\ a un total del 32,7%). Respecto a l a actividad minera , esta \nrepresenta\
\ solo el 2,8% de la mano de obra comunal. \n \nEl ingreso autónomo promedio del\
\ hogar es, en el caso de la comuna de Catemu, inferior al promedio \nregional\
\ ($618.371 para la región de Valparaíso), alcanzando los $415.146. Los niveles\
\ de pobreza son, según \nla encuesta CASEN del año 2009, son relativamente bajos,\
\ para la comuna de Catemu es de 8,17%, inferior al \npromedio regional (15,04%).\
\ Los niveles de desocupación se encuentran bajo el promedio regional (12% en\
\ \nla Región de Valparaíso) llegando a 9,5%."
- "4.1. Sismo de Diseño \nComo parte del proyecto, VST solicitó a SyS Ingenieros\
\ Consultores Ltda. \n(SyS) el desarrollo de un estudio sísmico, con el objeto\
\ de determinar el sismo \nde diseño del proyecto. El informe emitido sepresenta\
\ en el ANEXO B. \nEn este estudio, se establece el marco sismogénico de la zona\
\ del proyecto, \ncaracterizándose las fuentes sísmicas de la región. Para definir\
\ el sismo de \ndiseño, se revisa la sismicidad histórica e instrumental en la\
\ zona. El análisis"
- "[7]. VST Ingenieros Ltda. (2006). Informe Analisis de Estabilidad. \nDocumento\
\ N° 1005-IB-GE-IT-03, Preparado para Minera Las Cenizas \nS.A., Proyecto Ingeniería\
\ Básica Depó sito en Pasta, Planta Cabildo, \nSantiago. \n[8]. SYS Ingenieros\
\ Consultores Ltda. (2008). Estudio de Peligro Sísmico, \nEspectros de Respuesta\
\ y Generación de Registros Artificiales, para el \nDepósito en Pasta, Planta\
\ Cabildo, V Región. Documento N° SS-08011-\n01e, Preparado para Minera Las Cenizas\
\ S.A., Proyecto Ingeniería Básica \nDepósito en Pasta, Planta Cabildo, Santiago."
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_euclidean
- spearman_euclidean
- pearson_manhattan
- spearman_manhattan
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts_dev
metrics:
- type: pearson_cosine
value: 0.5831932863030391
name: Pearson Cosine
- type: spearman_cosine
value: 0.5906194729573898
name: Spearman Cosine
- type: pearson_euclidean
value: 0.5756994845513362
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.5906194729573898
name: Spearman Euclidean
- type: pearson_manhattan
value: 0.5770566145594384
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.5928811030247982
name: Spearman Manhattan
- type: pearson_dot
value: 0.5831933080761252
name: Pearson Dot
- type: spearman_dot
value: 0.5906193987619038
name: Spearman Dot
- type: pearson_max
value: 0.5831933080761252
name: Pearson Max
- type: spearman_max
value: 0.5928811030247982
name: Spearman Max
- task:
type: binary-classification
name: Binary Classification
dataset:
name: quora duplicates dev
type: quora_duplicates_dev
metrics:
- type: cosine_accuracy
value: 0.7702297702297702
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.5348124504089355
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.7889160554197229
name: Cosine F1
- type: cosine_f1_threshold
value: 0.46667712926864624
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.7223880597014926
name: Cosine Precision
- type: cosine_recall
value: 0.8689407540394973
name: Cosine Recall
- type: cosine_ap
value: 0.8889059831012722
name: Cosine Ap
- type: euclidean_accuracy
value: 0.5554445554445554
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: -0.5359790325164795
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.714193962748876
name: Euclidean F1
- type: euclidean_f1_threshold
value: -0.5359790325164795
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.556
name: Euclidean Precision
- type: euclidean_recall
value: 0.9982046678635548
name: Euclidean Recall
- type: euclidean_ap
value: 0.383504836147922
name: Euclidean Ap
- type: manhattan_accuracy
value: 0.5554445554445554
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: -8.377983093261719
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.714193962748876
name: Manhattan F1
- type: manhattan_f1_threshold
value: -8.377983093261719
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.556
name: Manhattan Precision
- type: manhattan_recall
value: 0.9982046678635548
name: Manhattan Recall
- type: manhattan_ap
value: 0.383634100548808
name: Manhattan Ap
- type: dot_accuracy
value: 0.7702297702297702
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.5348123908042908
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.7889160554197229
name: Dot F1
- type: dot_f1_threshold
value: 0.46667706966400146
name: Dot F1 Threshold
- type: dot_precision
value: 0.7223880597014926
name: Dot Precision
- type: dot_recall
value: 0.8689407540394973
name: Dot Recall
- type: dot_ap
value: 0.8889059831012722
name: Dot Ap
- type: max_accuracy
value: 0.7702297702297702
name: Max Accuracy
- type: max_accuracy_threshold
value: 0.5348124504089355
name: Max Accuracy Threshold
- type: max_f1
value: 0.7889160554197229
name: Max F1
- type: max_f1_threshold
value: 0.46667712926864624
name: Max F1 Threshold
- type: max_precision
value: 0.7223880597014926
name: Max Precision
- type: max_recall
value: 0.9982046678635548
name: Max Recall
- type: max_ap
value: 0.8889059831012722
name: Max Ap
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("GbrlOl/finetune-embedding-all-MiniLM-L6-v2-geotechnical-test-v3")
# Run inference
sentences = [
'¿Cuál es la población total de la comuna de Catemu según el Censo 2017?',
'Plan de Cierre PAG Planta Catemu \nCompañía Explotadora de Minas (CEMIN) \n \n \n Rev. 0 | 20-04-18 25 | 158 \n3.5 Medio humano \n \nLa localidad más próxima a a la planta corresponde a la localidad del mismo nombre. Catemu , comuna \nperteneciente a la provincia de San Felipe de Aconcagua, de la Región de Valparaíso . La planta se encuentra \nubicada a 85 km al norte de Santiago y 95 km del puerto de Valparaíso. \n \nSegún información del Instituto Nacional de Estadísticas (I NE) para el Ce nso 2017, la población total de la \ncomuna Catemu es de 13.998 habitantes, correspondiendo a los totales de 6.982 hombres y 7.016 mujeres. \nEl número total de viviendas es 5.171 y densidad de población de 38,8 (Hab/km 2). \n \nLa actividad económica que predomina en la comuna de Catemu corresponde al sector de A gricultura, caza, \nganadería y silvicultura (correspondiendo a un total del 32,7%). Respecto a l a actividad minera , esta \nrepresenta solo el 2,8% de la mano de obra comunal. \n \nEl ingreso autónomo promedio del hogar es, en el caso de la comuna de Catemu, inferior al promedio \nregional ($618.371 para la región de Valparaíso), alcanzando los $415.146. Los niveles de pobreza son, según \nla encuesta CASEN del año 2009, son relativamente bajos, para la comuna de Catemu es de 8,17%, inferior al \npromedio regional (15,04%). Los niveles de desocupación se encuentran bajo el promedio regional (12% en \nla Región de Valparaíso) llegando a 9,5%.',
'[7]. VST Ingenieros Ltda. (2006). Informe Analisis de Estabilidad. \nDocumento N° 1005-IB-GE-IT-03, Preparado para Minera Las Cenizas \nS.A., Proyecto Ingeniería Básica Depó sito en Pasta, Planta Cabildo, \nSantiago. \n[8]. SYS Ingenieros Consultores Ltda. (2008). Estudio de Peligro Sísmico, \nEspectros de Respuesta y Generación de Registros Artificiales, para el \nDepósito en Pasta, Planta Cabildo, V Región. Documento N° SS-08011-\n01e, Preparado para Minera Las Cenizas S.A., Proyecto Ingeniería Básica \nDepósito en Pasta, Planta Cabildo, Santiago.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
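Beyond pairwise similarity, the same embeddings can drive retrieval, one of the use cases listed above. A minimal semantic-search sketch (the corpus and query strings below are illustrative placeholders, not part of the training data):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("GbrlOl/finetune-embedding-all-MiniLM-L6-v2-geotechnical-test-v3")

# Placeholder corpus; in practice these would be your document chunks
corpus = [
    "Plan de cierre de la planta Catemu.",
    "Informe de estabilidad de taludes, Planta Cabildo.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("¿Dónde se ubica la planta Catemu?", convert_to_tensor=True)

# Returns, per query, a ranked list of {'corpus_id': ..., 'score': ...} dicts
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])
```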
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts_dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| pearson_cosine | 0.5832 |
| spearman_cosine | 0.5906 |
| pearson_euclidean | 0.5757 |
| spearman_euclidean | 0.5906 |
| pearson_manhattan | 0.5771 |
| spearman_manhattan | 0.5929 |
| pearson_dot | 0.5832 |
| spearman_dot | 0.5906 |
| pearson_max | 0.5832 |
| **spearman_max** | **0.5929** |
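The table above was produced by the `EmbeddingSimilarityEvaluator` linked earlier. A minimal sketch of running the same kind of evaluation yourself (the sentence pairs and gold scores here are hypothetical placeholders):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("GbrlOl/finetune-embedding-all-MiniLM-L6-v2-geotechnical-test-v3")

# Hypothetical pairs with gold similarity scores in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["first sentence", "another sentence"],
    sentences2=["a close paraphrase", "something unrelated"],
    scores=[0.9, 0.1],
    name="sts_dev",
)
results = evaluator(model)  # dict of Pearson/Spearman metrics per similarity function
```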
#### Binary Classification
* Dataset: `quora_duplicates_dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.7702 |
| cosine_accuracy_threshold | 0.5348 |
| cosine_f1 | 0.7889 |
| cosine_f1_threshold | 0.4667 |
| cosine_precision | 0.7224 |
| cosine_recall | 0.8689 |
| cosine_ap | 0.8889 |
| euclidean_accuracy | 0.5554 |
| euclidean_accuracy_threshold | -0.536 |
| euclidean_f1 | 0.7142 |
| euclidean_f1_threshold | -0.536 |
| euclidean_precision | 0.556 |
| euclidean_recall | 0.9982 |
| euclidean_ap | 0.3835 |
| manhattan_accuracy | 0.5554 |
| manhattan_accuracy_threshold | -8.378 |
| manhattan_f1 | 0.7142 |
| manhattan_f1_threshold | -8.378 |
| manhattan_precision | 0.556 |
| manhattan_recall | 0.9982 |
| manhattan_ap | 0.3836 |
| dot_accuracy | 0.7702 |
| dot_accuracy_threshold | 0.5348 |
| dot_f1 | 0.7889 |
| dot_f1_threshold | 0.4667 |
| dot_precision | 0.7224 |
| dot_recall | 0.8689 |
| dot_ap | 0.8889 |
| max_accuracy | 0.7702 |
| max_accuracy_threshold | 0.5348 |
| max_f1 | 0.7889 |
| max_f1_threshold | 0.4667 |
| max_precision | 0.7224 |
| max_recall | 0.9982 |
| **max_ap** | **0.8889** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 5,005 training samples
* Columns: <code>query</code>, <code>sentence</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | sentence | label |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 8 tokens</li><li>mean: 27.85 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 233.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>0: ~45.30%</li><li>1: ~54.70%</li></ul> |
* Samples:
| query | sentence | label |
|:-----------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>¿Cuál es la aceleración máxima obtenida para el sismo máximo probable según las fórmulas de atenuación de Ruiz y Saragoni (2005)?</code> | <code>Las aceleraciones máximas obtenidas para cada uno de los sismos de diseño considerados, se <br>determinaron a partir de las fórmulas de atenuación propuestas por Ruiz, S. y Saragoni, R. (2005), <br>alcanzando un valor de 0,79g para el sismo de operación y de 0,86g para el sismo máximo probable, <br>ver Figura 4.10.</code> | <code>1</code> |
| <code>¿Qué tipo de información estratégica no se identifica como de utilidad pública para la faena minera El Toqui?</code> | <code>PLAN DE CIERRE TEMPORAL – FAENA MINERA EL TOQUI <br> Sociedad Contractual Minera El Toqui <br>Capítulo 9 – Información Estratégica <br> <br> <br>REVISIÓN [0] <br>9-123 <br> <br>9. INFORMACIÓN ESTRATÉGICA <br>Para faena El Toqui , no se identifica información técnica que sea considerada de utilidad pública, tal <br>como la relativa la infraestructura, monumentos nacionales, según definición de la ley 17.288, sitios de <br>valor antropológico, arqueológico, histórico y, en general, los perte necientes al patrimonio <br>arquitectónico y natural, en el área de influencia del proyecto.</code> | <code>1</code> |
| <code>¿Qué condiciones se deben verificar al momento del cierre del tranque de relaves según el compromiso RES 1219-2013?</code> | <code>6 <br>1.2.3 Tranque de Relaves <br>Se incorporan los compromisos asociados a Sectoriales, desde el punto de vista de Estabilidad Física. <br>Los compromisos de Sectoriales asociados al Tranque de Relaves son los siguientes: <br>RES 1219-2013. Plan de Cierre 2009 <br>• Al momento del cierre, se verificarán que las condiciones de estabilidad de los taludes de los <br>muros estén de acuerdo a los coeficientes de sismicidad. Si esta condición no se cumple, se <br>evaluará la instalación de un muro de protección al pie del talud, así como la compactación <br>de berma de coronamiento. <br>• Se manejarán las aguas superficiales para asegurar estabilidad del depósito en el largo <br>plazo y el control de la erosión. Este manejo podrá incluir, entre otras, el reperfilamiento de la <br>superficie del depósito para permitir un drenaje natural positivo o una infiltración aceptable, la <br>evaporación de lagunas de aguas claras, un programa de manejo de cubeta, etc. <br>En función de que los compromisos adquiridos por Anglo American, O...</code> | <code>1</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
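These parameters map directly onto the loss constructor in Sentence Transformers. A minimal construction sketch (dataset and trainer wiring omitted; `scale=20.0` and the default pairwise cosine similarity correspond to the values above):

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# CoSENTLoss defaults to pairwise cosine similarity as its similarity_fct
train_loss = losses.CoSENTLoss(model=model, scale=20.0)
```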
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 100
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 100
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | sts_dev_spearman_max | quora_duplicates_dev_max_ap |
|:-------:|:-----:|:-------------:|:--------------------:|:---------------------------:|
| 0 | 0 | - | 0.5929 | 0.8889 |
| 0.7937 | 100 | 5.4081 | - | - |
| 1.5794 | 200 | 4.5952 | - | - |
| 2.3651 | 300 | 3.8915 | - | - |
| 3.1508 | 400 | 3.397 | - | - |
| 3.9444 | 500 | 3.0268 | - | - |
| 4.7302 | 600 | 2.4922 | - | - |
| 5.5159 | 700 | 2.0998 | - | - |
| 6.3016 | 800 | 1.7355 | - | - |
| 7.0873 | 900 | 1.4673 | - | - |
| 7.8810 | 1000 | 1.3359 | - | - |
| 8.6667 | 1100 | 0.8865 | - | - |
| 9.4524 | 1200 | 0.9228 | - | - |
| 10.2381 | 1300 | 0.5653 | - | - |
| 11.0238 | 1400 | 0.6117 | - | - |
| 11.8175 | 1500 | 0.4088 | - | - |
| 12.6032 | 1600 | 0.4279 | - | - |
| 13.3889 | 1700 | 0.4085 | - | - |
| 14.1746 | 1800 | 0.2934 | - | - |
| 14.9683 | 1900 | 0.288 | - | - |
| 15.7540 | 2000 | 0.2059 | - | - |
| 16.5397 | 2100 | 0.2632 | - | - |
| 17.3254 | 2200 | 0.2341 | - | - |
| 18.1111 | 2300 | 0.2264 | - | - |
| 18.9048 | 2400 | 0.2186 | - | - |
| 19.6905 | 2500 | 0.1205 | - | - |
| 20.4762 | 2600 | 0.192 | - | - |
| 21.2619 | 2700 | 0.1249 | - | - |
| 22.0476 | 2800 | 0.132 | - | - |
| 22.8413 | 2900 | 0.1026 | - | - |
| 23.6270 | 3000 | 0.1111 | - | - |
| 24.4127 | 3100 | 0.117 | - | - |
| 25.1984 | 3200 | 0.0843 | - | - |
| 25.9921 | 3300 | 0.1367 | - | - |
| 26.7778 | 3400 | 0.1702 | - | - |
| 27.5635 | 3500 | 0.1249 | - | - |
| 28.3492 | 3600 | 0.0918 | - | - |
| 29.1349 | 3700 | 0.0203 | - | - |
| 29.9286 | 3800 | 0.0965 | - | - |
| 30.7143 | 3900 | 0.0638 | - | - |
| 31.5 | 4000 | 0.0965 | - | - |
| 32.2857 | 4100 | 0.0948 | - | - |
| 33.0714 | 4200 | 0.0115 | - | - |
| 33.8651 | 4300 | 0.0336 | - | - |
| 34.6508 | 4400 | 0.0784 | - | - |
| 35.4365 | 4500 | 0.0265 | - | - |
| 36.2222 | 4600 | 0.0127 | - | - |
| 37.0079 | 4700 | 0.02 | - | - |
| 37.8016 | 4800 | 0.0905 | - | - |
| 38.5873 | 4900 | 0.0184 | - | - |
| 39.3730 | 5000 | 0.0222 | - | - |
| 40.1587 | 5100 | 0.0341 | - | - |
| 40.9524 | 5200 | 0.0373 | - | - |
| 41.7381 | 5300 | 0.0154 | - | - |
| 42.5238 | 5400 | 0.0518 | - | - |
| 43.3095 | 5500 | 0.0225 | - | - |
| 44.0952 | 5600 | 0.0355 | - | - |
| 44.8889 | 5700 | 0.0088 | - | - |
| 45.6746 | 5800 | 0.0143 | - | - |
| 46.4603 | 5900 | 0.0274 | - | - |
| 47.2460 | 6000 | 0.0104 | - | - |
| 48.0317 | 6100 | 0.0142 | - | - |
| 48.8254 | 6200 | 0.0032 | - | - |
| 49.6111 | 6300 | 0.0139 | - | - |
| 50.3968 | 6400 | 0.0328 | - | - |
| 51.1825 | 6500 | 0.0011 | - | - |
| 51.9762 | 6600 | 0.0051 | - | - |
| 52.7619 | 6700 | 0.0016 | - | - |
| 53.5476 | 6800 | 0.0032 | - | - |
| 54.3333 | 6900 | 0.0018 | - | - |
| 55.1190 | 7000 | 0.004 | - | - |
| 55.9127 | 7100 | 0.0023 | - | - |
| 56.6984 | 7200 | 0.0011 | - | - |
| 57.4841 | 7300 | 0.0009 | - | - |
| 58.2698 | 7400 | 0.0042 | - | - |
| 59.0556 | 7500 | 0.0018 | - | - |
| 59.8492 | 7600 | 0.001 | - | - |
| 60.6349 | 7700 | 0.0004 | - | - |
| 61.4206 | 7800 | 0.0074 | - | - |
| 62.2063 | 7900 | 0.003 | - | - |
| 63.0 | 8000 | 0.0007 | - | - |
| 63.7857 | 8100 | 0.0013 | - | - |
| 64.5714 | 8200 | 0.002 | - | - |
| 65.3571 | 8300 | 0.0007 | - | - |
| 66.1429 | 8400 | 0.0004 | - | - |
| 66.9365 | 8500 | 0.0006 | - | - |
| 67.7222 | 8600 | 0.0007 | - | - |
| 68.5079 | 8700 | 0.0051 | - | - |
| 69.2937 | 8800 | 0.0001 | - | - |
| 70.0794 | 8900 | 0.0006 | - | - |
| 70.8730 | 9000 | 0.0001 | - | - |
| 71.6587 | 9100 | 0.0002 | - | - |
| 72.4444 | 9200 | 0.0001 | - | - |
| 73.2302 | 9300 | 0.0003 | - | - |
| 74.0159 | 9400 | 0.0002 | - | - |
| 74.8095 | 9500 | 0.0002 | - | - |
| 75.5952 | 9600 | 0.0006 | - | - |
| 76.3810 | 9700 | 0.0 | - | - |
| 77.1667 | 9800 | 0.0001 | - | - |
| 77.9603 | 9900 | 0.0002 | - | - |
| 78.7460 | 10000 | 0.0 | - | - |
| 79.5317 | 10100 | 0.0001 | - | - |
| 80.3175 | 10200 | 0.0002 | - | - |
| 81.1032 | 10300 | 0.0 | - | - |
| 81.8968 | 10400 | 0.0001 | - | - |
| 82.6825 | 10500 | 0.0001 | - | - |
| 83.4683 | 10600 | 0.0 | - | - |
| 84.2540 | 10700 | 0.0001 | - | - |
| 85.0397 | 10800 | 0.0 | - | - |
| 85.8333 | 10900 | 0.0001 | - | - |
| 86.6190 | 11000 | 0.0001 | - | - |
| 87.4048 | 11100 | 0.0001 | - | - |
| 88.1905 | 11200 | 0.0 | - | - |
| 88.9841 | 11300 | 0.0001 | - | - |
| 89.7698 | 11400 | 0.0001 | - | - |
| 90.5556 | 11500 | 0.0001 | - | - |
| 91.3413 | 11600 | 0.0001 | - | - |
| 92.1270 | 11700 | 0.0 | - | - |
| 92.9206 | 11800 | 0.0 | - | - |
| 93.7063 | 11900 | 0.0 | - | - |
| 94.4921 | 12000 | 0.0 | - | - |
| 95.2778 | 12100 | 0.0001 | - | - |
| 96.0635 | 12200 | 0.0 | - | - |
| 96.8571 | 12300 | 0.0 | - | - |
| 97.6429 | 12400 | 0.0 | - | - |
| 98.4286 | 12500 | 0.0 | - | - |
| 99.2143 | 12600 | 0.0 | - | - |
</details>
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 3.3.1
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
csm9493/17_three_dataset_cot_lora_qved_r16_alpha32_lr_5e5_decay_1e2_cosine_epoch_1_mbs_16 | csm9493 | "2025-03-05T00:47:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-05T00:45:25Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/YamshadowInex12_Experiment26Shadow-GGUF | mradermacher | "2024-12-28T18:14:22Z" | 14 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/YamshadowInex12_Experiment26Shadow",
"base_model:quantized:MaziyarPanahi/YamshadowInex12_Experiment26Shadow",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-28T17:35:21Z" | ---
base_model: MaziyarPanahi/YamshadowInex12_Experiment26Shadow
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: YamshadowInex12_Experiment26Shadow
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/MaziyarPanahi/YamshadowInex12_Experiment26Shadow
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
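If you prefer a programmatic route, one option is the llama-cpp-python bindings. A minimal sketch (the Q4_K_M file is the "fast, recommended" quant from the table below; context size and sampling settings are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo
path = hf_hub_download(
    repo_id="mradermacher/YamshadowInex12_Experiment26Shadow-GGUF",
    filename="YamshadowInex12_Experiment26Shadow.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```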
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment26Shadow-GGUF/resolve/main/YamshadowInex12_Experiment26Shadow.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mujtabakk/lab1_random | mujtabakk | "2025-02-12T23:27:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-12T21:42:14Z" | ---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: random-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 0.019579731793573682
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# random-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 9.5890
- Model Preparation Time: 0.0105
- Bleu: 0.0196
## Model description
More information needed
## Intended uses & limitations
More information needed
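For completeness, a minimal inference sketch (note the near-zero BLEU above: this checkpoint appears to be a lab exercise, so translation quality is expected to be poor):

```python
from transformers import pipeline

translator = pipeline("translation", model="mujtabakk/lab1_random")
print(translator("The file could not be saved.")[0]["translation_text"])
```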
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
nghiatrannnnnn/24ca268d-c126-4b2e-bdf0-2097b6eb60d8 | nghiatrannnnnn | "2025-01-24T09:03:10Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b",
"base_model:adapter:unsloth/gemma-2-9b",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-24T08:34:17Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 24ca268d-c126-4b2e-bdf0-2097b6eb60d8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4cf5d702a334bf3d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4cf5d702a334bf3d_train_data.json
type:
field_input: Indonesian
field_instruction: Japanese
field_output: English
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nghiatrannnnnn/24ca268d-c126-4b2e-bdf0-2097b6eb60d8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4cf5d702a334bf3d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 806c5872-5a58-4985-92fe-c35dcfe76972
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 806c5872-5a58-4985-92fe-c35dcfe76972
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 24ca268d-c126-4b2e-bdf0-2097b6eb60d8
This model is a fine-tuned version of [unsloth/gemma-2-9b](https://huggingface.co/unsloth/gemma-2-9b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6094
## Model description
More information needed
## Intended uses & limitations
More information needed
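Since this repository contains a LoRA adapter rather than full model weights, it must be loaded on top of the base model. A minimal sketch (8-bit loading mirrors the `load_in_8bit: true` flag in the config above and requires bitsandbytes):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-9b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nghiatrannnnnn/24ca268d-c126-4b2e-bdf0-2097b6eb60d8")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-9b")
```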
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5575 | 0.0496 | 200 | 0.6094 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e9_s55555_v4_l4_v100 | KingKazma | "2023-08-13T14:24:41Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-13T14:24:40Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
ArthurZ/mamba-130m | ArthurZ | "2024-02-19T12:06:29Z" | 241 | 3 | transformers | [
"transformers",
"safetensors",
"mamba",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-16T06:43:33Z" | ---
library_name: transformers
tags: []
---
```python
>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b", padding_side = "left")
>>> tokenizer.pad_token = tokenizer.eos_token
>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m", vocab_size=50280, num_hidden_layers=24, torch_dtype=torch.float32)
>>> model.config.use_cache = True
>>> input_ids = tokenizer(["Hey how are you doing?", "Explain how soy sauce is made"], padding=True, return_tensors="pt")["input_ids"]
>>> out = model.generate(input_ids, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["<|endoftext|>Hey how are you doing?\n\nI'm a newbie to the game", 'Explain how soy sauce is made.\n\n1. Add the soy sauce to']
```
|
sajjadhadi/Disease-Diagnosis-DeepSeek-R1-Distill-Llama-8B | sajjadhadi | "2025-03-06T23:08:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"medical",
"diagnosis",
"generated_from_trainer",
"dataset:sajjadhadi/disease-diagnosis-dataset",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"region:us"
] | null | "2025-03-06T21:04:29Z" | ---
library_name: peft
license: apache-2.0
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
tags:
- trl
- sft
- medical
- diagnosis
- generated_from_trainer
model-index:
- name: sajjadhadi-Disease-Diagnosis-DeepSeek-R1-Distill-Llama-8B
results: []
datasets:
- sajjadhadi/disease-diagnosis-dataset
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. -->
## Disclaimer
**Important Notice**: This model is a research tool for disease diagnosis and is **NOT INTENDED** for clinical or medical use. It is designed for educational and experimental purposes only. The model's outputs should **NOT** be used to make medical decisions, diagnose conditions, or guide treatment. Always consult a qualified healthcare professional for medical advice.
The developers and contributors of this model are not responsible for any misuse or consequences arising from its application in medical contexts. Use this model responsibly and in compliance with ethical guidelines.
---
## Model Description
This model is a fine-tuned version of **DeepSeek-R1-Distill-Llama-8B**, adapted for disease diagnosis research. It leverages **LoRA (Low-Rank Adaptation)** to efficiently fine-tune the base model on a specialized dataset. The model is designed to analyze symptom descriptions and provide diagnostic suggestions in a structured format.
### Key Features:
- **Base Model**: `deepseek-ai/DeepSeek-R1-Distill-Llama-8B`
- **Fine-Tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Framework**: PEFT (Parameter-Efficient Fine-Tuning)
- **Intended Use**: Research and educational applications in medical diagnosis.
---
## Intended Uses & Limitations
### Intended Uses:
- **Research**: Study of AI applications in medical diagnosis.
- **Education**: Simulation of diagnostic scenarios for training purposes.
- **Prototyping**: Development of AI-assisted diagnostic tools (non-clinical).
### Limitations:
- **Not for Clinical Use**: This model is not validated for real-world medical applications.
- **Data Dependency**: The model's performance depends on the quality and scope of its training data.
- **Ethical Concerns**: The model may generate incomplete or inaccurate suggestions. Always verify outputs with medical professionals.
---
## Training and Evaluation Data
The model was fine-tuned on a dataset containing symptom-disease mappings. The dataset includes:
- **Symptom Descriptions**: Textual descriptions of patient symptoms.
- **Disease Labels**: Corresponding disease classifications based on symptoms.
The dataset was preprocessed and tokenized to ensure compatibility with the base model's architecture. Specific details about the dataset size and composition are not disclosed.
---
## Training Procedure
### Training Hyperparameters:
| Parameter | Value |
|-----------|-------|
| Learning Rate | 1e-4 |
| Batch Size | 64 |
| Evaluation Batch Size | 8 |
| Optimizer | Paged AdamW 32-bit |
| Scheduler | Cosine with 3% Warmup |
| Epochs | 1 |
| Seed | 42 |
### Technical Stack:
- **PEFT**: 0.14.0
- **Transformers**: 4.49.0
- **PyTorch**: 2.6.0+cu124
- **Datasets**: 3.3.2
- **Tokenizers**: 0.21.0
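For research experimentation only, a minimal inference sketch (the adapter is loaded on top of the base model listed above; the prompt format is an assumption, since the card does not document one):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
adapter_id = "sajjadhadi/Disease-Diagnosis-DeepSeek-R1-Distill-Llama-8B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical prompt; outputs are NOT medical advice
prompt = "Symptoms: persistent cough, mild fever, fatigue.\nPossible diagnosis:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```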
---
## Ethical Considerations
### Responsible Use:
- **Transparency**: Users should be aware of the model's limitations and intended use cases.
- **Bias Mitigation**: The model may inherit biases from its training data. Careful evaluation is required.
- **Privacy**: No real patient data was used in training.
### Prohibited Uses:
- Clinical diagnosis or treatment decisions.
- Self-diagnosis tools for patients.
- Applications that could harm individuals or communities.
---
## Acknowledgments
This model was developed using the **DeepSeek-R1-Distill-Llama-8B** base model and fine-tuned with the **PEFT** library. Special thanks to the open-source community for their contributions to AI research.
---
**Note**: This model is a work in progress. Further evaluation and documentation will be provided in future updates. |
Lettria/grag-go-idf-contrastive_8083-v2-trial-6 | Lettria | "2025-03-11T12:24:41Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"onnx",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4861",
"loss:ContrastiveLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-base",
"base_model:quantized:intfloat/multilingual-e5-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-11T12:23:29Z" | ---
base_model: intfloat/multilingual-e5-base
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4861
- loss:ContrastiveLoss
widget:
- source_sentence: 'Type de project: Actions de valorisation (expos physiques ou virtuelles,
journées d’étude, site internet, publications, documentaires…),Outils de médiation (cartes
et itinéraires papier ou numériques, livrets de visite, outils numériques, multimédia,
parcours d’interprétation…),Dispositifs pédagogiques (mallettes pédagogiques,
Moocs, supports de visite à destination des jeunes…),Événements rayonnant à l’échelle
de l’Île-de-France. Une attention particulière sera portée à la qualité des contenus,
à l’originalité et la pertinence des outils ou actions proposés, et à leur adéquation
avec les publics ciblés.'
sentences:
- '''Actions de valorisation'':projet|ÉVALUÉ_PAR|''adéquation avec les publics ciblés'':critère'
- '''mesdemarches.iledefrance.fr'':plateforme|ACCEPTE_DEMANDE|''Association - Fondation'':entité'
- '''projets de coopération'':projet|IMPLIQUE|''agriculteur cédant'':personne'
- source_sentence: 'Description: Cet appel à projets vise à soutenir les structures
en investissement qui agissent en faveur des jeunes en situation de précarité,
suite à une rupture familiale ou sociale pouvant entraîner de graves conséquences
sur leur santé ou leur sécurité.
Thèmes: Santé & Social : Solidarité
Nature de l''aide: Les dépenses éligibles se composent de dépenses de fonctionnement
exclusivement imputables à la mise en œuvre des projets retenus dans le cadre
de ce dispositif. La subvention régionale est fixée à 50 % maximum de la dépense
subventionnable (total des dépenses éligibles), dans la limite d’un plafond de
subvention fixé à 75 000 € maximum.
Délibération cadre: CR 100-16 du 22 septembre 2016 / CP 2018-428 du 17 octobre
2018'
sentences:
- '''C''POSSIBLE'':programme|FAVORISE_INSERTION_PROFESSIONNELLE|''lycéens'':groupe'
- '''Date de début'':concept|EST|''non précisée'':__inferred__'
- '''subvention régionale'':aide|LIMITE|''appel à projets'':projet'
- source_sentence: 'Type de project: Le programme propose des rencontres le samedi
après-midi dans une université ou une grande école réputée, entre les professionnels
bénévoles et les lycéens et collégiens sous la forme d''atelier thématiques. Ces
moments de rencontre touchent à une grande multitude de domaines d’activités. L''objectif
est de donner l’opportunité aux jeunes les plus enclavés d’échanger avec des intervenants
professionnels aux parcours atypiques et inspirants. Les intervenants suscitent
les ambitions et élargissent les perspectives des élèves.'
sentences:
- '''concours'':événement|CIBLE|''jeunes'':groupe'
- '''projets'':__inferred__|TÉLÉCHARGER_ET_REMPLIR|''charte des valeurs de la République
et de la laïcité'':document'
- '''programme'':initiative|IMPLIQUE|''lycéens'':groupe'
- source_sentence: 'Type de project: Le Prix des Innovateurs vise à encourager, soutenir
et valoriser la recherche, le transfert de technologie et l’émergence d’innovations
en santé dont l’impact sociétal et de santé publique est remarquable. Ce prix
a ainsi vocation à : Contribuer à la reconnaissance d’un chercheur et de son
équipe menant des recherches dans le secteur de la santé,Encourager la création
de spin-off de laboratoires académiques en garantissant les meilleures conditions
d’essaimage notamment par l’acquisition des compétences requises par l’ensemble
des membres de l’équipe,Renforcer'
sentences:
- '''2nde session de dépôt'':session|diffusion prévue|''diffusion à partir de novembre
2025'':__inferred__'
- '''chercheur'':personne|DIRIGE|''équipe de recherche'':groupe'
- '''Collectivité ou institution - Communes de > 20 000 hab'':organisation|éligible
pour|''dépôt des demandes de subvention'':procédure'
- source_sentence: 'Date de début: non précisée
Date de fin (clôture): non précisée
Date de début de la future campagne: non précisée'
sentences:
- '''Date de fin'':concept|EST|''Lundi 18 Novembre 2024'':__inferred__'
- '''Région IDF'':organisation|PROPOSE|''Grands Lieux d''Innovation'':programme'
- '''Date de fin'':concept|EST|''non précisée'':__inferred__'
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: BinaryClassifEval
type: BinaryClassifEval
metrics:
- type: cosine_accuracy
value: 0.7058340180772391
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.793916642665863
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.7171875
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7811518907546997
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.6912650602409639
name: Cosine Precision
- type: cosine_recall
value: 0.7451298701298701
name: Cosine Recall
- type: cosine_ap
value: 0.7612878163621353
name: Cosine Ap
- type: cosine_mcc
value: 0.4056919853026572
name: Cosine Mcc
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision 835193815a3936a24a0ee7dc9e3d48c1fbb19c55 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lettria/grag-go-idf-contrastive_8083-v2-trial-6")
# Run inference
sentences = [
'Date de début: non précisée\nDate de fin (clôture): non précisée\nDate de début de la future campagne: non précisée',
"'Date de fin':concept|EST|'non précisée':__inferred__",
"'Date de fin':concept|EST|'Lundi 18 Novembre 2024':__inferred__",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `BinaryClassifEval`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:--------------------------|:-----------|
| cosine_accuracy | 0.7058 |
| cosine_accuracy_threshold | 0.7939 |
| cosine_f1 | 0.7172 |
| cosine_f1_threshold | 0.7812 |
| cosine_precision | 0.6913 |
| cosine_recall | 0.7451 |
| **cosine_ap** | **0.7613** |
| cosine_mcc | 0.4057 |
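The numbers above come from the `BinaryClassificationEvaluator` linked earlier. A minimal sketch of running the same evaluation (the chunk/triple pairs here are hypothetical placeholders):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("Lettria/grag-go-idf-contrastive_8083-v2-trial-6")

# Hypothetical pairs labeled 1 (triple supported by the chunk) or 0 (unrelated)
evaluator = BinaryClassificationEvaluator(
    sentences1=["chunk text A", "chunk text A"],
    sentences2=["a triple supported by A", "a triple unrelated to A"],
    labels=[1, 0],
    name="BinaryClassifEval",
)
results = evaluator(model)  # accuracy/F1/precision/recall/AP at auto-tuned thresholds
```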
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 4,861 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 26 tokens</li><li>mean: 191.64 tokens</li><li>max: 429 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.2 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Type de project: L’excès de précipitations tout au long de l’année a conduit à une chute spectaculaire des rendements des céréales d’été et des protéagineux (blé, orge, pois, féverole, etc.) que produisent 90% des agriculteurs d’Île-de-France, historique grenier à blé du pays. Tributaires naturels du fleurissement des cultures, les apiculteurs professionnels de la région ont également souffert de ces dérèglements climatiques.La Région accompagne les exploitations concernées en leur apportant une aide exceptionnelle.</code> | <code>'excès de précipitations':phénomène|DIMINUE|'rendements des protéagineux':concept</code> | <code>1</code> |
| <code>Type de project: Dans le cadre de sa stratégie « Impact 2028 », la Région s’engage dans la défense de la souveraineté industrielle en renforçant son soutien à une industrie circulaire et décarbonée, porteuse d’innovations et créatrice d’emplois. PM'up Jeunes pousses industrielles soutient les projets d’implantation d’une première usine tournée vers la décarbonation, l’efficacité énergétique et la circularité des processus de production. Ces projets peuvent prendre l'une de ces formes : Une première unité de production industrielle, après une phase de prototypage,Une ligne pilote de production industrielle, en interne ou chez un tiers situé en Île-de-France, à condition que sa production soit destinée à de premières commercialisations,La transformation d’une unité de production pilote à une unité de production industrielle</code> | <code>'Région Île-de-France':organisation|soutient|'industrie décarbonée':concept</code> | <code>1</code> |
| <code>Procédures et démarches: Le dépôt des demandes de subvention se fait en ligne sur la plateforme régionale mesdemarches.iledefrance.fr : Session de dépôt unique pour les nouvelles demandes : du 30 septembre au 4 novembre 2024 (11 heures) pour des festivals qui se déroulent entre le 1er mars 2025 et le 28 février 2026 (vote à la CP de mars 2025). Pour les demandes de renouvellement, un mail est envoyé aux structures concernées par le service du Spectacle vivant en amont de chaque session de dépôt.<br>Bénéficiaires: Professionnel - Culture, Association - Fondation, Association - Régie par la loi de 1901, Association - ONG, Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou institution - Autre (GIP, copropriété, EPA...), Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab, Collectivité ou institution - Département, Collectivité ou institution - EPC...</code> | <code>'Collectivité ou institution - EPCI':bénéficiaire|PEUT_BÉNÉFICIER|'demandes de subvention':procédure</code> | <code>1</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
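For reference, a minimal sketch of instantiating this loss with the `sentence-transformers` API (the base model name below is a placeholder, not the checkpoint behind this card):

```python
from sentence_transformers import SentenceTransformer, losses

# Placeholder base model for illustration only.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Mirrors the parameters listed above: cosine distance, margin 0.5, averaged loss.
train_loss = losses.ContrastiveLoss(
    model=model,
    distance_metric=losses.SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
    size_average=True,
)
```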
### Evaluation Dataset
#### json
* Dataset: json
* Size: 1,217 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 24 tokens</li><li>mean: 188.47 tokens</li><li>max: 394 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 31.22 tokens</li><li>max: 133 tokens</li></ul> | <ul><li>0: ~38.40%</li><li>1: ~61.60%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------|
| <code>Type de project: Le programme propose des rencontres le samedi après-midi dans une université ou une grande école réputée, entre les professionnels bénévoles et les lycéens et collégiens sous la forme d'atelier thématiques. Ces moments de rencontre touchent à une grande multitude de domaines d’activités. L'objectif est de donner l’opportunité aux jeunes les plus enclavés d’échanger avec des intervenants professionnels aux parcours atypiques et inspirants. Les intervenants suscitent les ambitions et élargissent les perspectives des élèves.</code> | <code>'rencontres':événement|impliquent|'professionnels bénévoles':groupe</code> | <code>1</code> |
| <code>Précision sure les bénéficiaires: Communes,Établissements publics de coopération intercommunale (avec ou sans fiscalité propre),Établissements publics territoriaux franciliens,Départements,Aménageurs publics et privés (lorsque ces derniers interviennent à la demande ou pour le compte d'une collectivité précitée).</code> | <code>'Aménageurs privés':entité|INTERVIENT_POUR|'Départements':entité</code> | <code>1</code> |
| <code>Date de début: non précisée<br>Date de fin (clôture): non précisée<br>Date de début de la future campagne: non précisée</code> | <code>'Date de fin':concept|EST|'non précisée':__inferred__</code> | <code>1</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2.0007927807284357e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_steps`: 320
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `hub_model_id`: Lettria/grag-go-idf-contrastive_8083-v2-trial-6
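As a rough illustration, these non-default values map onto `SentenceTransformerTrainingArguments` as sketched below (an assumed reconstruction, not the exact training script; `output_dir` is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder; not listed in this card
    eval_strategy="epoch",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    learning_rate=2.0007927807284357e-05,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_steps=320,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    hub_model_id="Lettria/grag-go-idf-contrastive_8083-v2-trial-6",
)
```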
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2.0007927807284357e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 320
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: Lettria/grag-go-idf-contrastive_8083-v2-trial-6
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | BinaryClassifEval_cosine_ap |
|:-------:|:-------:|:-------------:|:---------------:|:---------------------------:|
| 0.0658 | 40 | 0.0617 | - | - |
| 0.1316 | 80 | 0.0552 | - | - |
| 0.1974 | 120 | 0.0538 | - | - |
| 0.2632 | 160 | 0.0488 | - | - |
| 0.3289 | 200 | 0.0498 | - | - |
| 0.3947 | 240 | 0.0458 | - | - |
| 0.4605 | 280 | 0.0425 | - | - |
| 0.5263 | 320 | 0.0398 | - | - |
| 0.5921 | 360 | 0.0403 | - | - |
| 0.6579 | 400 | 0.0377 | - | - |
| 0.7237 | 440 | 0.0339 | - | - |
| 0.7895 | 480 | 0.0372 | - | - |
| 0.8553 | 520 | 0.0364 | - | - |
| 0.9211 | 560 | 0.0358 | - | - |
| 0.9868 | 600 | 0.0326 | - | - |
| **1.0** | **608** | **-** | **0.0268** | **0.7613** |
| 1.0526 | 640 | 0.0335 | - | - |
| 1.1184 | 680 | 0.0296 | - | - |
| 1.1842 | 720 | 0.0273 | - | - |
| 1.25 | 760 | 0.0253 | - | - |
| 1.3158 | 800 | 0.0249 | - | - |
| 1.3816 | 840 | 0.0276 | - | - |
| 1.4474 | 880 | 0.0255 | - | - |
| 1.5132 | 920 | 0.0204 | - | - |
| 1.5789 | 960 | 0.026 | - | - |
| 1.6447 | 1000 | 0.0202 | - | - |
| 1.7105 | 1040 | 0.0224 | - | - |
| 1.7763 | 1080 | 0.0246 | - | - |
| 1.8421 | 1120 | 0.0249 | - | - |
| 1.9079 | 1160 | 0.0214 | - | - |
| 1.9737 | 1200 | 0.0212 | - | - |
| 2.0 | 1216 | - | 0.0286 | 0.7398 |
| 2.0395 | 1240 | 0.0181 | - | - |
| 2.1053 | 1280 | 0.0156 | - | - |
| 2.1711 | 1320 | 0.0142 | - | - |
| 2.2368 | 1360 | 0.0189 | - | - |
| 2.3026 | 1400 | 0.0154 | - | - |
| 2.3684 | 1440 | 0.0184 | - | - |
| 2.4342 | 1480 | 0.0144 | - | - |
| 2.5 | 1520 | 0.0181 | - | - |
| 2.5658 | 1560 | 0.0154 | - | - |
| 2.6316 | 1600 | 0.0144 | - | - |
| 2.6974 | 1640 | 0.0175 | - | - |
| 2.7632 | 1680 | 0.0133 | - | - |
| 2.8289 | 1720 | 0.0163 | - | - |
| 2.8947 | 1760 | 0.012 | - | - |
| 2.9605 | 1800 | 0.0168 | - | - |
| 3.0 | 1824 | - | 0.0296 | 0.7407 |
| 3.0263 | 1840 | 0.0125 | - | - |
| 3.0921 | 1880 | 0.0115 | - | - |
| 3.1579 | 1920 | 0.0102 | - | - |
| 3.2237 | 1960 | 0.0097 | - | - |
| 3.2895 | 2000 | 0.0101 | - | - |
| 3.3553 | 2040 | 0.0104 | - | - |
| 3.4211 | 2080 | 0.0105 | - | - |
| 3.4868 | 2120 | 0.0105 | - | - |
| 3.5526 | 2160 | 0.0104 | - | - |
| 3.6184 | 2200 | 0.0088 | - | - |
| 3.6842 | 2240 | 0.0109 | - | - |
| 3.75 | 2280 | 0.0123 | - | - |
| 3.8158 | 2320 | 0.0102 | - | - |
| 3.8816 | 2360 | 0.0099 | - | - |
| 3.9474 | 2400 | 0.0103 | - | - |
| 4.0 | 2432 | - | 0.0294 | 0.7537 |
| 4.0132 | 2440 | 0.0093 | - | - |
| 4.0789 | 2480 | 0.0067 | - | - |
| 4.1447 | 2520 | 0.0083 | - | - |
| 4.2105 | 2560 | 0.0081 | - | - |
| 4.2763 | 2600 | 0.0083 | - | - |
| 4.3421 | 2640 | 0.0059 | - | - |
| 4.4079 | 2680 | 0.008 | - | - |
| 4.4737 | 2720 | 0.0078 | - | - |
| 4.5395 | 2760 | 0.0062 | - | - |
| 4.6053 | 2800 | 0.0064 | - | - |
| 4.6711 | 2840 | 0.0051 | - | - |
| 4.7368 | 2880 | 0.0059 | - | - |
| 4.8026 | 2920 | 0.0074 | - | - |
| 4.8684 | 2960 | 0.0068 | - | - |
| 4.9342 | 3000 | 0.0082 | - | - |
| 5.0 | 3040 | 0.0085 | 0.0319 | 0.7341 |
| 5.0658 | 3080 | 0.004 | - | - |
| 5.1316 | 3120 | 0.0049 | - | - |
| 5.1974 | 3160 | 0.005 | - | - |
| 5.2632 | 3200 | 0.0059 | - | - |
| 5.3289 | 3240 | 0.005 | - | - |
| 5.3947 | 3280 | 0.0047 | - | - |
| 5.4605 | 3320 | 0.0044 | - | - |
| 5.5263 | 3360 | 0.0046 | - | - |
| 5.5921 | 3400 | 0.0044 | - | - |
| 5.6579 | 3440 | 0.0065 | - | - |
| 5.7237 | 3480 | 0.0054 | - | - |
| 5.7895 | 3520 | 0.0062 | - | - |
| 5.8553 | 3560 | 0.0054 | - | - |
| 5.9211 | 3600 | 0.0041 | - | - |
| 5.9868 | 3640 | 0.0048 | - | - |
| 6.0 | 3648 | - | 0.0336 | 0.7182 |
| 6.0526 | 3680 | 0.0035 | - | - |
| 6.1184 | 3720 | 0.0029 | - | - |
| 6.1842 | 3760 | 0.0033 | - | - |
| 6.25 | 3800 | 0.0048 | - | - |
| 6.3158 | 3840 | 0.0058 | - | - |
| 6.3816 | 3880 | 0.0037 | - | - |
| 6.4474 | 3920 | 0.0035 | - | - |
| 6.5132 | 3960 | 0.0043 | - | - |
| 6.5789 | 4000 | 0.004 | - | - |
| 6.6447 | 4040 | 0.0026 | - | - |
| 6.7105 | 4080 | 0.0055 | - | - |
| 6.7763 | 4120 | 0.0031 | - | - |
| 6.8421 | 4160 | 0.0037 | - | - |
| 6.9079 | 4200 | 0.0036 | - | - |
| 6.9737 | 4240 | 0.0046 | - | - |
| 7.0 | 4256 | - | 0.0338 | 0.7097 |
| 7.0395 | 4280 | 0.0027 | - | - |
| 7.1053 | 4320 | 0.0026 | - | - |
| 7.1711 | 4360 | 0.0034 | - | - |
| 7.2368 | 4400 | 0.0039 | - | - |
| 7.3026 | 4440 | 0.0023 | - | - |
| 7.3684 | 4480 | 0.0034 | - | - |
| 7.4342 | 4520 | 0.0022 | - | - |
| 7.5 | 4560 | 0.0045 | - | - |
| 7.5658 | 4600 | 0.0027 | - | - |
| 7.6316 | 4640 | 0.0036 | - | - |
| 7.6974 | 4680 | 0.0031 | - | - |
| 7.7632 | 4720 | 0.0018 | - | - |
| 7.8289 | 4760 | 0.0019 | - | - |
| 7.8947 | 4800 | 0.0029 | - | - |
| 7.9605 | 4840 | 0.0033 | - | - |
| 8.0 | 4864 | - | 0.0338 | 0.7093 |
| 8.0263 | 4880 | 0.0029 | - | - |
| 8.0921 | 4920 | 0.0023 | - | - |
| 8.1579 | 4960 | 0.0026 | - | - |
| 8.2237 | 5000 | 0.0026 | - | - |
| 8.2895 | 5040 | 0.0025 | - | - |
| 8.3553 | 5080 | 0.0033 | - | - |
| 8.4211 | 5120 | 0.0031 | - | - |
| 8.4868 | 5160 | 0.0025 | - | - |
| 8.5526 | 5200 | 0.0025 | - | - |
| 8.6184 | 5240 | 0.0022 | - | - |
| 8.6842 | 5280 | 0.002 | - | - |
| 8.75 | 5320 | 0.0025 | - | - |
| 8.8158 | 5360 | 0.0018 | - | - |
| 8.8816 | 5400 | 0.0018 | - | - |
| 8.9474 | 5440 | 0.0031 | - | - |
| 9.0 | 5472 | - | 0.0342 | 0.7133 |
| 9.0132 | 5480 | 0.002 | - | - |
| 9.0789 | 5520 | 0.0026 | - | - |
| 9.1447 | 5560 | 0.0017 | - | - |
| 9.2105 | 5600 | 0.003 | - | - |
| 9.2763 | 5640 | 0.002 | - | - |
| 9.3421 | 5680 | 0.0019 | - | - |
| 9.4079 | 5720 | 0.0022 | - | - |
| 9.4737 | 5760 | 0.0018 | - | - |
| 9.5395 | 5800 | 0.0035 | - | - |
| 9.6053 | 5840 | 0.0024 | - | - |
| 9.6711 | 5880 | 0.0027 | - | - |
| 9.7368 | 5920 | 0.002 | - | - |
| 9.8026 | 5960 | 0.0029 | - | - |
| 9.8684 | 6000 | 0.0018 | - | - |
| 9.9342 | 6040 | 0.0022 | - | - |
| 10.0 | 6080 | 0.0023 | 0.0268 | 0.7613 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.3.0
- Accelerate: 1.1.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
fine-tuned/jina-embeddings-v2-base-en-21052024-5smg-webapp | fine-tuned | "2024-05-21T21:59:16Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Debate",
"Arguments",
"Counterarguments",
"Discussion",
"Opinions",
"custom_code",
"en",
"dataset:fine-tuned/jina-embeddings-v2-base-en-21052024-5smg-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-05-21T21:59:02Z" | ---
license: apache-2.0
datasets:
- fine-tuned/jina-embeddings-v2-base-en-21052024-5smg-webapp
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Debate
- Arguments
- Counterarguments
- Discussion
- Opinions
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
debate search engine
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jina-embeddings-v2-base-en-21052024-5smg-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
hkivancoral/hushem_5x_beit_base_rms_001_fold1 | hkivancoral | "2023-11-29T03:32:00Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-29T03:00:12Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_beit_base_rms_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4444444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_beit_base_rms_001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2430
- Accuracy: 0.4444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
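These settings correspond roughly to the following `transformers.TrainingArguments` (a sketch reconstructed from the list above; `output_dir` is assumed):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_5x_beit_base_rms_001_fold1",  # assumed output path
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```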
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5782 | 1.0 | 27 | 1.4061 | 0.2444 |
| 1.4004 | 2.0 | 54 | 1.4559 | 0.2444 |
| 1.3873 | 3.0 | 81 | 1.4120 | 0.2444 |
| 1.3666 | 4.0 | 108 | 1.6275 | 0.2444 |
| 1.3597 | 5.0 | 135 | 1.4398 | 0.2444 |
| 1.2814 | 6.0 | 162 | 1.5328 | 0.2444 |
| 1.2056 | 7.0 | 189 | 1.5389 | 0.2 |
| 1.1635 | 8.0 | 216 | 1.5332 | 0.2444 |
| 1.1235 | 9.0 | 243 | 1.6681 | 0.2444 |
| 1.1484 | 10.0 | 270 | 1.6176 | 0.2667 |
| 1.1757 | 11.0 | 297 | 1.6312 | 0.2444 |
| 1.1297 | 12.0 | 324 | 1.5067 | 0.2444 |
| 1.1448 | 13.0 | 351 | 1.5657 | 0.2444 |
| 1.1725 | 14.0 | 378 | 1.5184 | 0.1556 |
| 1.1591 | 15.0 | 405 | 1.5790 | 0.2444 |
| 1.1549 | 16.0 | 432 | 1.5501 | 0.2444 |
| 1.0865 | 17.0 | 459 | 1.5776 | 0.2444 |
| 1.1351 | 18.0 | 486 | 1.6195 | 0.3111 |
| 1.0974 | 19.0 | 513 | 1.5360 | 0.2444 |
| 1.0992 | 20.0 | 540 | 1.5742 | 0.3111 |
| 1.0894 | 21.0 | 567 | 1.4918 | 0.3778 |
| 1.0557 | 22.0 | 594 | 1.5742 | 0.2444 |
| 1.0574 | 23.0 | 621 | 1.5043 | 0.4222 |
| 1.0148 | 24.0 | 648 | 1.3535 | 0.4222 |
| 1.1133 | 25.0 | 675 | 1.4897 | 0.4 |
| 1.02 | 26.0 | 702 | 1.4554 | 0.4222 |
| 1.0107 | 27.0 | 729 | 1.4238 | 0.4 |
| 0.9307 | 28.0 | 756 | 1.7644 | 0.3556 |
| 0.8335 | 29.0 | 783 | 2.0253 | 0.3556 |
| 0.8203 | 30.0 | 810 | 1.7990 | 0.3556 |
| 0.7263 | 31.0 | 837 | 1.6909 | 0.3778 |
| 0.8387 | 32.0 | 864 | 1.4758 | 0.4 |
| 0.6837 | 33.0 | 891 | 2.1584 | 0.3556 |
| 0.7155 | 34.0 | 918 | 1.7102 | 0.3778 |
| 0.6349 | 35.0 | 945 | 1.1875 | 0.4667 |
| 0.6331 | 36.0 | 972 | 1.9965 | 0.4222 |
| 0.5871 | 37.0 | 999 | 1.7881 | 0.4 |
| 0.595 | 38.0 | 1026 | 1.7629 | 0.4 |
| 0.5266 | 39.0 | 1053 | 1.6720 | 0.4222 |
| 0.4985 | 40.0 | 1080 | 2.3229 | 0.4222 |
| 0.4855 | 41.0 | 1107 | 1.6470 | 0.4444 |
| 0.503 | 42.0 | 1134 | 1.7515 | 0.4667 |
| 0.4432 | 43.0 | 1161 | 2.0538 | 0.4222 |
| 0.3668 | 44.0 | 1188 | 2.1471 | 0.4444 |
| 0.3654 | 45.0 | 1215 | 2.0004 | 0.4444 |
| 0.3317 | 46.0 | 1242 | 2.1973 | 0.4444 |
| 0.2413 | 47.0 | 1269 | 2.2882 | 0.4444 |
| 0.2395 | 48.0 | 1296 | 2.2389 | 0.4444 |
| 0.2502 | 49.0 | 1323 | 2.2430 | 0.4444 |
| 0.237 | 50.0 | 1350 | 2.2430 | 0.4444 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
lazyiitian/llama-2-7b-ggml | lazyiitian | "2023-07-19T16:10:01Z" | 0 | 2 | null | [
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | "2023-07-19T11:48:00Z" | ---
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rty1234/fine-tuned-cv-model_final | rty1234 | "2024-09-11T18:08:28Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-09-11T18:07:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Chandans01/sjcemcabvid | Chandans01 | "2024-09-03T09:27:43Z" | 8 | 0 | null | [
"blip-2",
"vision",
"image-to-text",
"image-captioning",
"visual-question-answering",
"en",
"arxiv:2301.12597",
"license:mit",
"region:us"
] | image-to-text | "2024-09-03T07:12:44Z" | ---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
---
# BLIP-2, OPT-2.7b, pre-trained only
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
### Memory requirements
The memory requirements differ based on the precision one uses. One can use 4-bit inference using [Bitsandbytes](https://huggingface.co/blog/4bit-transformers-bitsandbytes), which greatly reduces the memory requirements.
| dtype | Largest Layer or Residual Group | Total Size | Training using Adam |
|-------------------|---------------------------------|------------|----------------------|
| float32 | 490.94 MB | 14.43 GB | 57.72 GB |
| float16/bfloat16 | 245.47 MB | 7.21 GB | 28.86 GB |
| int8 | 122.73 MB | 3.61 GB | 14.43 GB |
| int4 | 61.37 MB | 1.8 GB | 7.21 GB |
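The table above includes an int4 row; a minimal sketch of 4-bit loading with bitsandbytes (the compute dtype here is an illustrative choice):

```python
# pip install accelerate bitsandbytes
import torch
from transformers import Blip2ForConditionalGeneration, Blip2Processor, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # illustrative choice
)
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    quantization_config=quant_config,
    device_map="auto",
)
```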
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details> |
lucasrct/sentiment_prompt_2_model | lucasrct | "2025-01-17T19:22:01Z" | 57 | 0 | null | [
"safetensors",
"gpt_neox",
"region:us"
] | null | "2025-01-17T16:45:59Z" | # sentiment_prompt_2_model
## Model Description
This model is a fine-tuned version of `EleutherAI/pythia-410m` trained on `dair-ai/emotion` data.
## Dataset Details
- Dataset Configuration: unsplit
- Dataset Name: dair-ai/emotion
- Prompt template: `text: {text}\nsentiment: {label}`
## Training Details
- Base Model: EleutherAI/pythia-410m
- Training Parameters:
- Learning Rate: 2e-05
- Batch Size: 1
- Epochs: 1
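## Usage
A minimal inference sketch, assuming the checkpoint loads through the standard causal-LM auto classes (the example text is made up):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lucasrct/sentiment_prompt_2_model")
model = AutoModelForCausalLM.from_pretrained("lucasrct/sentiment_prompt_2_model")

# Follow the prompt template used during training and let the model complete it.
prompt = "text: i feel quite hopeful about the future\nsentiment:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```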
|
TheBloke/Llama-2-70B-AWQ | TheBloke | "2023-11-09T18:21:10Z" | 443 | 14 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"arxiv:2307.09288",
"base_model:meta-llama/Llama-2-70b-hf",
"base_model:quantized:meta-llama/Llama-2-70b-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-09-19T00:05:44Z" | ---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
model_name: Llama 2 70B
base_model: meta-llama/Llama-2-70b-hf
inference: false
model_creator: Meta Llama 2
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 70B - AWQ
- Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
- Original model: [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf)
<!-- description start -->
## Description
This repo contains AWQ model files for [Meta Llama 2's Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, using AWQ enables much smaller GPUs, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-70B-GGUF)
* [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-70B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.61 GB |
<!-- README_AWQ.md-provided-files end -->
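To fetch a branch programmatically, one option is `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Download the 4-bit 128g files from the main branch to a local cache directory.
local_dir = snapshot_download(repo_id="TheBloke/Llama-2-70B-AWQ", revision="main")
print(local_dir)
```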
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-70B-AWQ --quantization awq
```
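Once the server is running, it can be queried over HTTP. A sketch assuming the demo API server's `/generate` endpoint and JSON fields, which vary between vLLM versions:

```shell
curl http://localhost:8000/generate \
    -d '{"prompt": "Tell me about AI", "max_tokens": 128, "temperature": 0.7}'
```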
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Llama-2-70B-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Llama-2-70B-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Meta Llama 2's Llama 2 70B
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
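As an illustration, a single-turn chat prompt follows this layout (a sketch; consult the linked reference code for the authoritative construction):

```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>

{user_message} [/INST]
```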
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|Model|Size|TruthfulQA|ToxiGen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|Model|Size|TruthfulQA|ToxiGen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
## Reporting Issues
Please report any software “bug” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
|
YakovElm/Hyperledger5Classic_512 | YakovElm | "2023-05-27T19:38:29Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-27T19:37:53Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger5Classic_512
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger5Classic_512
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3034
- Train Accuracy: 0.8744
- Validation Loss: 0.4265
- Validation Accuracy: 0.8185
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
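For reference, the serialized optimizer dict above corresponds to a standard Keras Adam instance; a minimal sketch of reconstructing it (assuming the TF 2.12 listed under framework versions):

```python
import tensorflow as tf

# Rebuilding the optimizer from the serialized config above (TF 2.12 Adam).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
    clipnorm=1.0,  # per the config: clip gradients to a max norm of 1.0
)
```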
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4068 | 0.8537 | 0.4270 | 0.8361 | 0 |
| 0.3760 | 0.8537 | 0.4053 | 0.8361 | 1 |
| 0.3034 | 0.8744 | 0.4265 | 0.8185 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jervanAMG/llama2-qlora-finetunined-french | jervanAMG | "2023-08-15T22:20:55Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-15T22:20:34Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
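These settings map directly onto `transformers`' `BitsAndBytesConfig` when loading a base model for this adapter. A hedged sketch follows; the base model name is a placeholder, since the card does not state it:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization config listed above (4-bit NF4, fp16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder: the card does not name the base model
    quantization_config=bnb_config,
)
model = PeftModel.from_pretrained(base, "jervanAMG/llama2-qlora-finetunined-french")
```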
### Framework versions
- PEFT 0.5.0.dev0
|
junseojang/new-KU-LG-30K-comb1-epoch_1 | junseojang | "2025-03-24T01:43:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-24T01:18:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevarshRaj/neww_model_mistral_token | DevarshRaj | "2024-03-11T12:37:36Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-11T12:23:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jaspionjader/fr-8-8b | jaspionjader | "2025-01-05T07:59:18Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:jaspionjader/fct-14-8b",
"base_model:merge:jaspionjader/fct-14-8b",
"base_model:jaspionjader/fr-7-8b",
"base_model:merge:jaspionjader/fr-7-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-05T07:54:12Z" | ---
base_model:
- jaspionjader/fct-14-8b
- jaspionjader/fr-7-8b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [jaspionjader/fct-14-8b](https://huggingface.co/jaspionjader/fct-14-8b)
* [jaspionjader/fr-7-8b](https://huggingface.co/jaspionjader/fr-7-8b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: jaspionjader/fr-7-8b
layer_range:
- 0
- 32
- model: jaspionjader/fct-14-8b
layer_range:
- 0
- 32
merge_method: slerp
base_model: jaspionjader/fr-7-8b
parameters:
t:
- filter: self_attn
value:
- 0.07
- 0.07
- 0.07
- 0.07
- 0.07
- filter: mlp
value:
- 0.07
- 0.07
- 0.07
- 0.07
- 0.07
- value: 0.07
dtype: bfloat16
```
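SLERP interpolates each pair of weight tensors along the arc between them rather than linearly; with `t = 0.07` across all parameter groups, the merge stays much closer to the base model (`jaspionjader/fr-7-8b`) than to `fct-14-8b`. A minimal sketch of the operation on a single tensor pair (mergekit's implementation handles edge cases this omits):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation between two weight tensors, flattened to vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(a_n @ b_n, -1.0, 1.0))  # angle between tensors
    if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# t = 0.07 keeps ~93% of the base model's contribution for every layer
```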
|
great0001/9e6f97dc-4b1b-423c-965d-3fc1112dc248 | great0001 | "2025-01-26T19:43:16Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | "2025-01-26T19:24:11Z" | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9e6f97dc-4b1b-423c-965d-3fc1112dc248
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5c45504c39856700_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5c45504c39856700_train_data.json
type:
field_instruction: title
field_output: context
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/9e6f97dc-4b1b-423c-965d-3fc1112dc248
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5c45504c39856700_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a23cb1b6-2311-41d1-8e3f-18e935e6a94b
wandb_project: Birthday-SN56-14-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a23cb1b6-2311-41d1-8e3f-18e935e6a94b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9e6f97dc-4b1b-423c-965d-3fc1112dc248
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0002 | 6 | nan |
| 0.0 | 0.0003 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yuliu1234/Meta-Llama-3-8B-Instruct-Q8_0-GGUF | yuliu1234 | "2024-05-31T02:34:10Z" | 2 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-05-31T02:33:43Z" | ---
language:
- en
license: llama3
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
base_model: meta-llama/Meta-Llama-3-8B-Instruct
pipeline_tag: text-generation
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
---
# yuliu1234/Meta-Llama-3-8B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo yuliu1234/Meta-Llama-3-8B-Instruct-Q8_0-GGUF --hf-file meta-llama-3-8b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo yuliu1234/Meta-Llama-3-8B-Instruct-Q8_0-GGUF --hf-file meta-llama-3-8b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./main --hf-repo yuliu1234/Meta-Llama-3-8B-Instruct-Q8_0-GGUF --hf-file meta-llama-3-8b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./server --hf-repo yuliu1234/Meta-Llama-3-8B-Instruct-Q8_0-GGUF --hf-file meta-llama-3-8b-instruct-q8_0.gguf -c 2048
```
|
jonatatyska/Qwen2.5-1.5B-Open-R1-Distill | jonatatyska | "2025-03-19T16:55:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:jonatatyska/cartpole_sft_reasoning_all",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-19T16:00:54Z" | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: jonatatyska/cartpole_sft_reasoning_all
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [jonatatyska/cartpole_sft_reasoning_all](https://huggingface.co/datasets/jonatatyska/cartpole_sft_reasoning_all) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jonatatyska/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ine-ufsc/huggingface/runs/lih3zr3f)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
renyulin/my-new-shiny-tokenizer | renyulin | "2025-03-08T15:31:56Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-08T15:31:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OpenVINO/starcoder2-7b-fp16-ov | OpenVINO | "2024-11-05T09:42:45Z" | 9 | 0 | transformers | [
"transformers",
"openvino",
"starcoder2",
"text-generation",
"base_model:bigcode/starcoder2-7b",
"base_model:finetune:bigcode/starcoder2-7b",
"license:bigcode-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-11T06:42:04Z" | ---
license: bigcode-openrail-m
base_model:
- bigcode/starcoder2-7b
---
# starcoder2-7b-fp16-ov
* Model creator: [BigCode](https://huggingface.co/bigcode)
* Original model: [bigcode/starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b)
## Description
This is [bigcode/starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format.
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2024.2.0 and higher
* Optimum Intel 1.17.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```bash
pip install optimum[openvino]
```
2. Run model inference:
```python
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/starcoder2-7b-fp16-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("def print_hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```bash
pip install openvino-genai huggingface_hub
```
2. Download model from HuggingFace Hub
```python
import huggingface_hub as hf_hub
model_id = "OpenVINO/starcoder2-7b-fp16-ov"
model_path = "starcoder2-7b-fp16-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```python
import openvino_genai as ov_genai
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("def print_hello_world():", max_length=200))
```
More GenAI usage examples can be found in OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples)
## Legal information
The original model is distributed under [bigcode-openrail-m](https://www.bigcode-project.org/docs/pages/bigcode-openrail/) license. More details can be found in [bigcode/starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights. |
mateiaassAI/MT5_MEID3_300_3 | mateiaassAI | "2024-09-05T12:31:55Z" | 103 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-09-05T12:29:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RayneAmes/creed_v2 | RayneAmes | "2025-02-09T21:09:43Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-09T21:07:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DrR0b0t/ppo-LunarLander-v2 | DrR0b0t | "2023-07-13T14:59:19Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-10T16:33:37Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.32 +/- 14.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|