modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
zhichao158/wav2vec2-xls-r-common_voice-tr-ft | zhichao158 | "2022-01-14T07:03:32Z" | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3736
- Wer: 0.2930
- Cer: 0.0708
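A minimal inference sketch, not part of the original card, assuming the checkpoint works with the standard 🤗 ASR pipeline (`sample.wav` is a placeholder path to an input audio file):
```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned Turkish ASR checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="zhichao158/wav2vec2-xls-r-common_voice-tr-ft",
)

# "sample.wav" is a placeholder path to a 16 kHz audio file.
print(asr("sample.wav")["text"])
```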
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
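For reference, a hypothetical `TrainingArguments` sketch of the per-device settings above (the 8-GPU distributed launch, e.g. via `torchrun`, is assumed and yields the totals of 96 and 64):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the settings listed above;
# per-device batch sizes x 8 GPUs give the totals of 96 and 64.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-common_voice-tr-ft",
    learning_rate=5e-4,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```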
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.5462 | 13.51 | 500 | 0.4423 | 0.4807 | 0.1188 |
| 0.342 | 27.03 | 1000 | 0.3781 | 0.3954 | 0.0967 |
| 0.2272 | 40.54 | 1500 | 0.3816 | 0.3595 | 0.0893 |
| 0.1805 | 54.05 | 2000 | 0.3943 | 0.3487 | 0.0854 |
| 0.1318 | 67.57 | 2500 | 0.3818 | 0.3262 | 0.0801 |
| 0.1213 | 81.08 | 3000 | 0.3777 | 0.3113 | 0.0758 |
| 0.0639 | 94.59 | 3500 | 0.3788 | 0.2953 | 0.0716 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.8.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Ayushi26/t5-legal-summary | Ayushi26 | "2025-03-19T06:24:42Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-11T13:32:10Z" | ---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-legal-summary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-legal-summary
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7749
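A minimal inference sketch, not part of the original card, assuming the checkpoint works with the standard text2text pipeline (the `summarize:` prefix is the usual T5 convention, and the input is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned T5 legal summarizer.
summarizer = pipeline("text2text-generation", model="Ayushi26/t5-legal-summary")

# Placeholder input; T5 checkpoints are commonly prompted with a task prefix.
document = "summarize: The parties agree that ..."
print(summarizer(document, max_new_tokens=128)[0]["generated_text"])
```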
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6378 | 1.0 | 12 | 1.6668 |
| 1.939 | 2.0 | 24 | 1.4088 |
| 1.5974 | 3.0 | 36 | 1.2544 |
| 1.4909 | 4.0 | 48 | 1.1440 |
| 1.2012 | 5.0 | 60 | 1.0653 |
| 1.1827 | 6.0 | 72 | 1.0084 |
| 1.0929 | 7.0 | 84 | 0.9612 |
| 1.0614 | 8.0 | 96 | 0.9168 |
| 1.0783 | 9.0 | 108 | 0.8833 |
| 0.9964 | 10.0 | 120 | 0.8573 |
| 0.9311 | 11.0 | 132 | 0.8384 |
| 1.014 | 12.0 | 144 | 0.8233 |
| 0.872 | 13.0 | 156 | 0.8103 |
| 0.8249 | 14.0 | 168 | 0.8008 |
| 0.8789 | 15.0 | 180 | 0.7915 |
| 0.8135 | 16.0 | 192 | 0.7848 |
| 0.849 | 17.0 | 204 | 0.7803 |
| 0.8621 | 18.0 | 216 | 0.7773 |
| 0.836 | 19.0 | 228 | 0.7755 |
| 0.7608 | 20.0 | 240 | 0.7749 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
WiroAI/wiroai-turkish-llm-9b | WiroAI | "2025-01-31T02:00:21Z" | 3,192 | 19 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"tr",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-22T13:49:15Z" | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
base_model:
- google/gemma-2-9b
language:
- tr
model-index:
- name: wiroai-turkish-llm-9b
results:
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: MMLU_TR_V0.2
metrics:
- name: 5-shot
type: 5-shot
value: 0.5982
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: Truthful_QA_V0.2
metrics:
- name: 0-shot
type: 0-shot
value: 0.4991
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: ARC_TR_V0.2
metrics:
- name: 25-shot
type: 25-shot
value: 0.5367
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: HellaSwag_TR_V0.2
metrics:
- name: 10-shot
type: 10-shot
value: 0.5701
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: GSM8K_TR_V0.2
metrics:
- name: 5-shot
type: 5-shot
value: 0.6682
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: Winogrande_TR_V0.2
metrics:
- name: 5-shot
type: 5-shot
value: 0.6058
verified: false
---
<div align="center">
<img src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/wiro_logo.png" width="15%" alt="Wiro AI" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.wiro.ai/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/homepage.svg" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://wiro.ai/tools?search=&categories=chat&tags=&page=0" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/chat.svg" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/WiroAI" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://huggingface.co/WiroAI/wiroai-turkish-llm-9b/resolve/main/huggingface.svg" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://instagram.com/wiroai" target="_blank" style="margin: 2px;">
<img alt="Instagram Follow" src="https://img.shields.io/badge/Instagram-wiroai-555555?logo=instagram&logoColor=white&labelColor=E4405F" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://x.com/wiroai" target="_blank" style="margin: 2px;">
<img alt="X Follow" src="https://img.shields.io/badge/X-wiroai-555555?logo=x&logoColor=white&labelColor=000000" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://wiro.ai/agreement/terms-of-service" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-gemma-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
# 🚀 Meet WiroAI/wiroai-turkish-llm-9b: a robust language model with enhanced Turkish language and culture support! 🚀
## 🌟 Key Features
- Fine-tuned with 500,000+ high-quality Turkish instructions
- LoRA method was used for fine-tuning without quantization.
- Adapted to Turkish culture and local context
- Built on Google's cutting-edge Gemma architecture
## 📝 Model Details
This model is the Turkish-speaking member of Google's Gemma model family, trained with Supervised Fine-Tuning (SFT) on carefully curated, high-quality Turkish instructions. It demonstrates strong performance on Turkish language-processing tasks.
## 🔧 Technical Specifications
- Architecture: Decoder-only transformer
- Base Model: Google Gemma 2 9B
- Training Data: 500,000+ specially selected Turkish instructions
- Language Support: Turkish (with comprehensive local context understanding) and other common languages.
## 💡 Use Cases
- Text Generation and Editing
- Question Answering
- Summarization
- Analysis and Reasoning
- Content Transformation
- Turkish Natural Language Processing Tasks
- Turkish Culture
## 🚀 Advantages
- Local Understanding: Ability to comprehend Turkish culture, idioms, and current events
- Resource Efficiency: Effective operation even with limited hardware resources
- Flexible Deployment: Usable on desktop, laptop, or custom cloud infrastructure
- Open Model: Transparent and customizable architecture
## 🌍 About Google Gemma 2
Gemma is Google's family of lightweight, state-of-the-art open models, developed using the same research and technology used to create the Gemini models. These models are designed to be deployable in environments with limited resources, making AI technology accessible to everyone.
## 📈 Performance and Limitations
While the model demonstrates high performance in Turkish language tasks, users should consider the following:
- Use clear and structured instructions for best results.
- Verify model outputs for critical applications.
- Evaluate resource requirements before deployment.
- Be aware that the benchmarks below were obtained under specific conditions; results can be replicated, and the chosen conditions are explained below the table.
### Benchmark Scores
| Models | MMLU TR | TruthfulQA TR | ARC TR | HellaSwag TR | GSM8K TR | WinoGrande TR | Average |
|-----------------------------------------------------------|:-------:|:-------------:|:------:|:------------:|:--------:|:-------------:|:-------:|
| **WiroAI/wiroai-turkish-llm-9b** | **59.8** | 49.9 | **53.7** | **57.0** | 66.8 | **60.6** | **58.0** |
| selimc/OrpoGemma-2-9B-TR | 53.0 | 54.3 | 52.4 | 52.0 | 64.8 | 58.9 | 55.9 |
| Metin/Gemma-2-9b-it-TR-DPO-V1 | 51.3 | 54.7 | 52.6 | 51.2 | 67.1 | 55.2 | 55.4 |
| CohereForAI/aya-expanse-8b | 52.3 | 52.8 | 49.3 | 56.7 | 61.3 | 59.2 | 55.3 |
| ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1 | 52.0 | 57.6 | 51.0 | 53.0 | 59.8 | 58.0 | 55.2 |
| google/gemma-2-9b-it | 51.8 | 53.0 | 52.2 | 51.5 | 63.0 | 56.2 | 54.6 |
| Eurdem/Defne-llama3.1-8B | 52.9 | 51.2 | 47.1 | 51.6 | 59.9 | 57.5 | 53.4 |
| **WiroAI/wiroai-turkish-llm-8b** | 52.4 | 49.5 | 50.1 | 54 | 57.5 | 57.0 | 53.4 |
| meta-llama/Meta-Llama-3-8B-Instruct | 52.2 | 49.2 | 44.2 | 49.2 | 56.0 | 56.7 | 51.3 |
Benchmarks were run with:
```bash
lm_eval --model_args pretrained=<model_path> --tasks mmlu_tr_v0.2,arc_tr-v0.2,gsm8k_tr-v0.2,hellaswag_tr-v0.2,truthfulqa_v0.2,winogrande_tr-v0.2
```
Please see https://github.com/malhajar17/lm-evaluation-harness_turkish and note that we use default language inference, the same approach as in OpenLLMLeaderboard v2.0.
## Usage
### Transformers Pipeline
```python
import transformers
import torch
model_id = "WiroAI/wiroai-turkish-llm-9b"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
pipeline.model.eval()
instruction = "Bana İstanbul ile alakalı bir sosyal medya postu hazırlar mısın?"
messages = [
{"role": "user", "content": f"{instruction}"}
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<end_of_turn>")
]
outputs = pipeline(
prompt,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
```markdown
İstanbul'un büyüsüne kapılın! :city_sunset:
Halk arasında "dünyanın masalı şehri" olarak bilinen İstanbul, her köşesinde tarih, kültür ve modern yaşamın bir araya geldiği eşsiz bir şehir.
Yüzyıllardır farklı medeniyetlerin izlerini taşıyan İstanbul, tarihi mekanlarından, müzelerinden, çarşılarından ve restoranlarından oluşan zengin kültürel mirasa sahiptir.
Boğaz'ın eşsiz manzarasında tekne turu yapmak, Topkapı Sarayı'nı ziyaret etmek, Grand Bazaar'da alışveriş yapmak, Mısır Çarşısı'nın canlı atmosferinde kaybolmak, Galata Kulesi'nden muhteşem bir manzara deneyimlemek veya Beyoğlu'nun hareketli sokaklarında yürüyüş yapmak İstanbul'da unutulmaz anılar yaratmak için fırsatlar sunar.
İstanbul'un büyülü atmosferini kendiniz yaşamak için hemen planınızı yapın! :flag-tr: #İstanbul #Türkiye #Seyahat #Tarih #Kültür #Gezi
```
## 🤝 License and Usage
This model is provided under Google's Gemma license. Please review and accept the license terms before use.
## 📫 Contact and Support
For questions, suggestions, and feedback, please open an issue on HuggingFace or contact us directly from our website.
## Citation
```none
@article{WiroAI,
title={WiroAI/wiroai-turkish-llm-9b},
author={Abdullah Bezir and Furkan Burhan Türkay and Cengiz Asmazoğlu},
year={2024},
url={https://huggingface.co/WiroAI/wiroai-turkish-llm-9b}
}
```
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
``` |
rbehzadan/bge-large-en-v1.5-ggml-f16 | rbehzadan | "2024-06-05T02:34:33Z" | 17 | 0 | null | [
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | "2024-06-05T02:03:10Z" | ---
license: mit
---
# bge-large-en-v1.5-GGUF for llama.cpp
This repository contains a converted version of the BAAI/bge-large-en-v1.5 text-embedding model, prepared for use with `llama.cpp` or the Python `llama-cpp-python` library.
**Original Model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)
**Conversion Details:**
* The conversion was performed using `llama.cpp`'s `convert-hf-to-gguf.py` script.
* This conversion optimizes the model for `llama.cpp`.
**Usage:**
This model can be loaded and used for text embedding tasks using the `llama-cpp-python` library. Here's an example:
```python
from llama_cpp import Llama

# Load the converted model from this repo in embedding mode
model = Llama.from_pretrained(
    repo_id="rbehzadan/bge-large-en-v1.5-ggml-f16",
    filename="*.gguf",  # glob matching the single GGUF file here
    embedding=True,
)

# Encode some text
text = "This is a sample sentence."
encoded_text = model.embed(text)
```
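BGE embeddings are typically compared with cosine similarity; a short follow-up sketch, assuming `model` from the example above:
```python
import numpy as np

# Cosine similarity between two embeddings from model.embed(...)
a = np.asarray(model.embed("This is a sample sentence."))
b = np.asarray(model.embed("Here is another sentence."))
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```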
**Important Notes:**
* This converted model might have slight performance variations compared to the original model due to the conversion process.
* Ensure you have the `llama-cpp-python` library installed for this model to function.
**License:**
The license for this model is inherited from the original BAAI/bge-large-en-v1.5 model (refer to the original model's repository for details).
**Contact:**
Feel free to create an issue in this repository for any questions or feedback. |
RichardErkhov/cmncomp_-_coldint_0694-4bits | RichardErkhov | "2025-02-04T18:24:41Z" | 5 | 0 | null | [
"safetensors",
"phi3",
"custom_code",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-04T18:22:48Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
coldint_0694 - bnb 4bits
- Model creator: https://huggingface.co/cmncomp/
- Original model: https://huggingface.co/cmncomp/coldint_0694/
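A minimal loading sketch, not from the original card, assuming the pre-quantized weights load directly with 🤗 Transformers (requires `bitsandbytes` and a CUDA GPU; `trust_remote_code=True` because the repo is tagged `custom_code`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical loader for the pre-quantized bnb-4bit checkpoint.
model_id = "RichardErkhov/cmncomp_-_coldint_0694-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
)
```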
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dogukankartal/sd-class-butterflies-32 | dogukankartal | "2024-08-04T21:16:44Z" | 44 | 1 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2024-08-04T21:16:31Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('dogukankartal/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Triangle104/Arch-Function-Chat-7B-Q6_K-GGUF | Triangle104 | "2025-04-05T23:52:10Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:katanemo/Arch-Function-Chat-7B",
"base_model:quantized:katanemo/Arch-Function-Chat-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-04-05T23:50:00Z" | ---
base_model: katanemo/Arch-Function-Chat-7B
language:
- en
library_name: transformers
license: other
license_name: katanemo-research
license_link: https://huggingface.co/katanemo/Arch-Function-Chat-7B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Arch-Function-Chat-7B-Q6_K-GGUF
This model was converted to GGUF format from [`katanemo/Arch-Function-Chat-7B`](https://huggingface.co/katanemo/Arch-Function-Chat-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/katanemo/Arch-Function-Chat-7B) for more details on the model.
---
The Arch-Function-Chat collection builds upon Katanemo's Arch-Function collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function-calling performance of the original collection while adding powerful new features that make it even more versatile in real-world applications.
In addition to function calling capabilities, this collection now offers:
- Clarify & refine: Generates natural follow-up questions to collect missing information for function calling
- Interpret & respond: Provides human-friendly responses based on function execution results
- Context management: Maintains context in complex multi-turn interactions
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Arch-Function-Chat-7B-Q6_K-GGUF --hf-file arch-function-chat-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Arch-Function-Chat-7B-Q6_K-GGUF --hf-file arch-function-chat-7b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Arch-Function-Chat-7B-Q6_K-GGUF --hf-file arch-function-chat-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Arch-Function-Chat-7B-Q6_K-GGUF --hf-file arch-function-chat-7b-q6_k.gguf -c 2048
```
|
timm/ViT-B-16-SigLIP-i18n-256 | timm | "2023-10-25T22:04:56Z" | 77,940 | 2 | open_clip | [
"open_clip",
"safetensors",
"clip",
"siglip",
"zero-shot-image-classification",
"dataset:webli",
"arxiv:2303.15343",
"license:apache-2.0",
"region:us"
] | zero-shot-image-classification | "2023-10-17T00:26:06Z" | ---
tags:
- clip
- siglip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: apache-2.0
datasets:
- webli
---
# Model card for ViT-B-16-SigLIP-i18n-256
A SigLIP (Sigmoid loss for Language-Image Pre-training) model trained on WebLI.
This model has been converted to PyTorch from the original JAX checkpoints in [Big Vision](https://github.com/google-research/big_vision). These weights are usable in both OpenCLIP (image + text) and timm (image only).
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Original:** https://github.com/google-research/big_vision
- **Dataset:** WebLI
- **Papers:**
- Sigmoid loss for language image pre-training: https://arxiv.org/abs/2303.15343
## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer # works on open-clip-torch>=2.23.0, timm>=0.9.8
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-B-16-SigLIP-i18n-256')
tokenizer = get_tokenizer('hf-hub:timm/ViT-B-16-SigLIP-i18n-256')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)
zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
### With `timm` (for image embeddings)
```python
from urllib.request import urlopen
from PIL import Image
import timm
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_siglip_256',
pretrained=True,
num_classes=0,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(image).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@article{zhai2023sigmoid,
title={Sigmoid loss for language image pre-training},
author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
journal={arXiv preprint arXiv:2303.15343},
year={2023}
}
```
```bibtex
@misc{big_vision,
author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
title = {Big Vision},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/google-research/big_vision}}
}
```
|
mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF | mradermacher | "2025-02-23T21:36:46Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"en",
"dataset:open-r1/OpenR1-Math-220k",
"dataset:yentinglin/s1K-1.1-trl-format",
"dataset:simplescaling/s1K-1.1",
"base_model:yentinglin/Mistral-Small-24B-Instruct-2501-reasoning",
"base_model:quantized:yentinglin/Mistral-Small-24B-Instruct-2501-reasoning",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-02-23T12:22:30Z" | ---
base_model: yentinglin/Mistral-Small-24B-Instruct-2501-reasoning
datasets:
- open-r1/OpenR1-Math-220k
- yentinglin/s1K-1.1-trl-format
- simplescaling/s1K-1.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- reasoning
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/yentinglin/Mistral-Small-24B-Instruct-2501-reasoning
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
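To fetch a single quant programmatically, a minimal sketch with `huggingface_hub` (the filename is copied from the Q4_K_M row of the table below):
```python
from huggingface_hub import hf_hub_download

# Download one quant file from this repo to the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF",
    filename="Mistral-Small-24B-Instruct-2501-reasoning.i1-Q4_K_M.gguf",
)
print(path)
```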
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-24B-Instruct-2501-reasoning-i1-GGUF/resolve/main/Mistral-Small-24B-Instruct-2501-reasoning.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
havinash-ai/c3c8c374-b5a4-4c0a-914e-5106e1150532 | havinash-ai | "2025-01-26T15:12:15Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | "2025-01-26T15:08:02Z" | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c3c8c374-b5a4-4c0a-914e-5106e1150532
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5b3f3bf50a162db2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5b3f3bf50a162db2_train_data.json
type:
field_input: determiner
field_instruction: ori_sentence
field_output: new_sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/c3c8c374-b5a4-4c0a-914e-5106e1150532
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5b3f3bf50a162db2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a78de5fb-c4de-47be-9681-3547984f2234
wandb_project: Mine-SN56-2-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a78de5fb-c4de-47be-9681-3547984f2234
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c3c8c374-b5a4-4c0a-914e-5106e1150532
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0007 | 3 | nan |
| 0.0 | 0.0014 | 6 | nan |
| 0.0 | 0.0021 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fernandoruiz/ALIA-40b-Q4_0-GGUF | fernandoruiz | "2025-01-25T18:02:02Z" | 20 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"bg",
"ca",
"code",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"eu",
"fi",
"fr",
"ga",
"gl",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"nn",
"oc",
"pl",
"pt",
"ro",
"ru",
"sh",
"sk",
"sl",
"sr",
"sv",
"uk",
"dataset:oscar-corpus/colossal-oscar-1.0",
"dataset:HuggingFaceFW/fineweb-edu",
"dataset:joelniklaus/eurlex_resources",
"dataset:joelniklaus/legal-mc4",
"dataset:projecte-aina/CATalog",
"dataset:UFRGS/brwac",
"dataset:community-datasets/hrwac",
"dataset:danish-foundation-models/danish-gigaword",
"dataset:HiTZ/euscrawl",
"dataset:PleIAs/French-PD-Newspapers",
"dataset:PleIAs/French-PD-Books",
"dataset:AI-team-UoA/greek_legal_code",
"dataset:HiTZ/latxa-corpus-v1.1",
"dataset:allenai/peS2o",
"dataset:pile-of-law/pile-of-law",
"dataset:PORTULAN/parlamento-pt",
"dataset:hoskinson-center/proof-pile",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/starcoderdata",
"dataset:bjoernp/tagesschau-2018-2023",
"dataset:EleutherAI/the_pile_deduplicated",
"base_model:BSC-LT/ALIA-40b",
"base_model:quantized:BSC-LT/ALIA-40b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-25T18:00:15Z" | ---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
language:
- bg
- ca
- code
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nn
- 'no'
- oc
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- uk
datasets:
- oscar-corpus/colossal-oscar-1.0
- HuggingFaceFW/fineweb-edu
- joelniklaus/eurlex_resources
- joelniklaus/legal-mc4
- projecte-aina/CATalog
- UFRGS/brwac
- community-datasets/hrwac
- danish-foundation-models/danish-gigaword
- HiTZ/euscrawl
- PleIAs/French-PD-Newspapers
- PleIAs/French-PD-Books
- AI-team-UoA/greek_legal_code
- HiTZ/latxa-corpus-v1.1
- allenai/peS2o
- pile-of-law/pile-of-law
- PORTULAN/parlamento-pt
- hoskinson-center/proof-pile
- togethercomputer/RedPajama-Data-1T
- bigcode/starcoderdata
- bjoernp/tagesschau-2018-2023
- EleutherAI/the_pile_deduplicated
base_model: BSC-LT/ALIA-40b
tags:
- llama-cpp
- gguf-my-repo
---
# fernandoruiz/ALIA-40b-Q4_0-GGUF
This model was converted to GGUF format from [`BSC-LT/ALIA-40b`](https://huggingface.co/BSC-LT/ALIA-40b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BSC-LT/ALIA-40b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fernandoruiz/ALIA-40b-Q4_0-GGUF --hf-file alia-40b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fernandoruiz/ALIA-40b-Q4_0-GGUF --hf-file alia-40b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fernandoruiz/ALIA-40b-Q4_0-GGUF --hf-file alia-40b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fernandoruiz/ALIA-40b-Q4_0-GGUF --hf-file alia-40b-q4_0.gguf -c 2048
```
|
t1msan/convnext-large-384-22k-1k-finetuned-eurosat | t1msan | "2024-04-16T18:43:38Z" | 193 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnext-large-384-22k-1k",
"base_model:finetune:facebook/convnext-large-384-22k-1k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-04-15T17:20:29Z" | ---
license: apache-2.0
base_model: facebook/convnext-large-384-22k-1k
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: convnext-large-384-22k-1k-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-large-384-22k-1k-finetuned-eurosat
This model is a fine-tuned version of [facebook/convnext-large-384-22k-1k](https://huggingface.co/facebook/convnext-large-384-22k-1k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
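A minimal inference sketch, not part of the original card, assuming the checkpoint works with the standard image-classification pipeline (`satellite.png` is a placeholder path or URL):
```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned ConvNeXt classifier.
classifier = pipeline(
    "image-classification",
    model="t1msan/convnext-large-384-22k-1k-finetuned-eurosat",
)
print(classifier("satellite.png"))
```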
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2737 | 0.97 | 22 | 0.0849 |
| 0.0416 | 1.98 | 45 | 0.0055 |
| 0.0096 | 2.99 | 68 | 0.0012 |
| 0.0018 | 3.87 | 88 | 0.0029 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tomaszki/gemma-35-copy | tomaszki | "2024-03-13T10:05:31Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-13T10:03:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
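In the absence of author-provided code, a minimal sketch assuming standard 🤗 causal-LM loading (the prompt is a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical loader; the repo is tagged gemma / text-generation.
model_id = "tomaszki/gemma-35-copy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```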
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bogeumkim/mistral-7b-ko-example | bogeumkim | "2023-12-19T06:45:40Z" | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | "2023-12-19T06:43:40Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
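In the absence of author-provided code, a minimal sketch assuming a standard PEFT adapter on top of the stated base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical loader: attach this adapter to its base model.
base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "bogeumkim/mistral-7b-ko-example")
```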
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF | mradermacher | "2024-12-16T04:13:36Z" | 26 | 1 | transformers | [
"transformers",
"gguf",
"moe",
"merge",
"mergekit",
"Solar Moe",
"Solar",
"Umbra",
"en",
"base_model:SteelStorage/Umbra-v2.1-MoE-4x10.7",
"base_model:quantized:SteelStorage/Umbra-v2.1-MoE-4x10.7",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-03-03T10:11:15Z" | ---
base_model: SteelStorage/Umbra-v2.1-MoE-4x10.7
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- merge
- mergekit
- Solar Moe
- Solar
- Umbra
---
## About
static quants of https://huggingface.co/SteelStorage/Umbra-v2.1-MoE-4x10.7
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
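For a quick local test, a minimal Python sketch is shown below (assuming the optional `llama-cpp-python` package; the quant file name follows the table in the next section):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo (Q4_K_S is the smaller "recommended" size).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF",
    filename="Umbra-v2.1-MoE-4x10.7.Q4_K_S.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Q: What is a Mixture of Experts model? A:", max_tokens=128)
print(out["choices"][0]["text"])
```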
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q2_K.gguf) | Q2_K | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.IQ3_XS.gguf) | IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q3_K_S.gguf) | Q3_K_S | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.IQ3_S.gguf) | IQ3_S | 15.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.IQ3_M.gguf) | IQ3_M | 16.1 | |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q3_K_M.gguf) | Q3_K_M | 17.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q3_K_L.gguf) | Q3_K_L | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.IQ4_XS.gguf) | IQ4_XS | 19.7 | |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q4_K_S.gguf) | Q4_K_S | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q4_K_M.gguf) | Q4_K_M | 22.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q5_K_S.gguf) | Q5_K_S | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q5_K_M.gguf) | Q5_K_M | 25.9 | |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q6_K.gguf) | Q6_K | 29.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.Q8_0.gguf) | Q8_0 | 38.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
solidrust/CerebrumHyperion-7B-DPO-AWQ | solidrust | "2024-09-03T08:42:30Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Locutusque/OpenCerebrum-1.0-7b-DPO",
"Locutusque/Hyperion-3.0-Mistral-7B-DPO",
"quantized",
"4-bit",
"AWQ",
"pytorch",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"chatml",
"en",
"base_model:hydra-project/CerebrumHyperion-7B-DPO",
"base_model:quantized:hydra-project/CerebrumHyperion-7B-DPO",
"awq",
"region:us"
] | text-generation | "2024-03-28T02:26:25Z" | ---
base_model: hydra-project/CerebrumHyperion-7B-DPO
inference: false
language:
- en
merged_models:
- Locutusque/OpenCerebrum-1.0-7b-DPO
- Locutusque/Hyperion-3.0-Mistral-7B-DPO
model_creator: hydra-project
model_name: CerebrumHyperion-7B-DPO
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: Suparious
tags:
- merge
- mergekit
- lazymergekit
- Locutusque/OpenCerebrum-1.0-7b-DPO
- Locutusque/Hyperion-3.0-Mistral-7B-DPO
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- chatml
---
# hydra-project/CerebrumHyperion-7B-DPO AWQ
- Model creator: [hydra-project](https://huggingface.co/hydra-project)
- Original model: [CerebrumHyperion-7B-DPO](https://huggingface.co/hydra-project/CerebrumHyperion-7B-DPO)
## Model Summary
CerebrumHyperion-7B-DPO is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Locutusque/OpenCerebrum-1.0-7b-DPO](https://huggingface.co/Locutusque/OpenCerebrum-1.0-7b-DPO)
* [Locutusque/Hyperion-3.0-Mistral-7B-DPO](https://huggingface.co/Locutusque/Hyperion-3.0-Mistral-7B-DPO)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/CerebrumHyperion-7B-DPO-AWQ"
system_message = "You are Cerebrum, incarnated a powerful AI."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types (see the sketch below)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
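As one example, a minimal sketch of offline inference with vLLM (the prompt follows the ChatML template below and is purely illustrative):

```python
from vllm import LLM, SamplingParams

# Load the AWQ checkpoint directly; vLLM handles dequantized inference.
llm = LLM(model="solidrust/CerebrumHyperion-7B-DPO-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=256)

prompt = ("<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
          "<|im_start|>user\nExplain AWQ in one sentence.<|im_end|>\n"
          "<|im_start|>assistant\n")
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```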
## Prompt template: ChatML
```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
|
vsisik/speecht5_tts_SK_v3 | vsisik | "2024-06-19T19:06:59Z" | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"sk",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-04-02T18:24:20Z" | ---
language:
- sk
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Slovak v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Slovak v3
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4046
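The card does not ship usage code, so here is a minimal synthesis sketch following the standard SpeechT5 recipe (the speaker-embedding source and the Slovak test sentence are illustrative assumptions):

```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("vsisik/speecht5_tts_SK_v3")
model = SpeechT5ForTextToSpeech.from_pretrained("vsisik/speecht5_tts_SK_v3")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# x-vector speaker embeddings, as in the SpeechT5 docs (any 512-dim x-vector works).
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Dobrý deň, ako sa máte?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)  # 16 kHz waveform
```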
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.5259 | 3.2258 | 500 | 0.4625 |
| 0.4823 | 6.4516 | 1000 | 0.4345 |
| 0.4702 | 9.6774 | 1500 | 0.4258 |
| 0.4502 | 12.9032 | 2000 | 0.4189 |
| 0.4579 | 16.1290 | 2500 | 0.4173 |
| 0.4418 | 19.3548 | 3000 | 0.4134 |
| 0.448 | 22.5806 | 3500 | 0.4117 |
| 0.4467 | 25.8065 | 4000 | 0.4094 |
| 0.4388 | 29.0323 | 4500 | 0.4084 |
| 0.4327 | 32.2581 | 5000 | 0.4071 |
| 0.4398 | 35.4839 | 5500 | 0.4069 |
| 0.4381 | 38.7097 | 6000 | 0.4065 |
| 0.4357 | 41.9355 | 6500 | 0.4053 |
| 0.4352 | 45.1613 | 7000 | 0.4059 |
| 0.4298 | 48.3871 | 7500 | 0.4050 |
| 0.4293 | 51.6129 | 8000 | 0.4043 |
| 0.4342 | 54.8387 | 8500 | 0.4050 |
| 0.4309 | 58.0645 | 9000 | 0.4045 |
| 0.4277 | 61.2903 | 9500 | 0.4047 |
| 0.4319 | 64.5161 | 10000 | 0.4046 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Niggendar/hadrianDelicexl_v06a | Niggendar | "2024-05-09T09:21:59Z" | 141 | 2 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-05-09T09:16:23Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vivian12300/p2 | Vivian12300 | "2025-01-24T04:09:04Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-24T04:06:04Z" | ---
base_model: meta-llama/Meta-Llama-3.1-8B
library_name: transformers
model_name: p2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for p2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Vivian12300/p2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.47.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kokovova/fddbb4b7-7f8e-4b12-98c4-62585322f21b | kokovova | "2025-01-23T10:19:19Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | "2025-01-23T10:00:20Z" | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fddbb4b7-7f8e-4b12-98c4-62585322f21b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 24399e229df13d88_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/24399e229df13d88_train_data.json
type:
field_instruction: prompt
field_output: data
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: kokovova/fddbb4b7-7f8e-4b12-98c4-62585322f21b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/24399e229df13d88_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b46cf0ce-f552-4c62-84aa-c038718cbc16
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b46cf0ce-f552-4c62-84aa-c038718cbc16
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# fddbb4b7-7f8e-4b12-98c4-62585322f21b
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3761
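This repo contains a LoRA adapter rather than full weights; below is a minimal inference sketch that attaches it to the base model named in the config above (the prompt is illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "numind/NuExtract-v1.5", torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "kokovova/fddbb4b7-7f8e-4b12-98c4-62585322f21b")
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract-v1.5", trust_remote_code=True)

inputs = tokenizer("Extract the invoice date: ...", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```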
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 2.3979 |
| 6.9414 | 0.0016 | 5 | 2.1509 |
| 6.4925 | 0.0032 | 10 | 1.8020 |
| 6.2157 | 0.0048 | 15 | 1.5555 |
| 5.7934 | 0.0064 | 20 | 1.4257 |
| 6.1225 | 0.0080 | 25 | 1.3838 |
| 5.6566 | 0.0095 | 30 | 1.3761 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
davidmaestrecic/agency_cic_model-david-2-lora | davidmaestrecic | "2025-04-12T05:15:53Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit",
"base_model:adapter:unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit",
"region:us"
] | null | "2025-03-09T04:02:22Z" | ---
base_model: unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
facebook/mms-tts-wsg | facebook | "2023-09-01T10:56:09Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-09-01T10:55:52Z" |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Gondi, Adilabad Text-to-Speech
This repository contains the **Gondi, Adilabad (wsg)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
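In practice, this means seeding before generation if you need repeatable output, e.g. with the Transformers helper:

```python
from transformers import set_seed

set_seed(555)  # any fixed seed makes repeated generations identical
```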
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-wsg")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-wsg")
text = "some example text in the Gondi, Adilabad language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
# squeeze the (1, num_samples) tensor to 1-D before writing
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
gavrilstep/8dafd58b-ca85-4062-9bc7-7501050d9dfb | gavrilstep | "2025-01-29T05:56:48Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-29T05:54:52Z" | ---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8dafd58b-ca85-4062-9bc7-7501050d9dfb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-1_5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8761f2b4c663324e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8761f2b4c663324e_train_data.json
type:
field_input: Article Content
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/8dafd58b-ca85-4062-9bc7-7501050d9dfb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/8761f2b4c663324e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 542ef2a5-6717-4111-9cb4-d9bb7d2c34d1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 542ef2a5-6717-4111-9cb4-d9bb7d2c34d1
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8dafd58b-ca85-4062-9bc7-7501050d9dfb
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0020 | 1 | 1.8587 |
| 1.7489 | 0.0099 | 5 | 1.8316 |
| 1.8207 | 0.0198 | 10 | 1.7020 |
| 1.5893 | 0.0296 | 15 | 1.5995 |
| 1.4091 | 0.0395 | 20 | 1.5290 |
| 1.4899 | 0.0494 | 25 | 1.5069 |
| 1.6132 | 0.0593 | 30 | 1.5017 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso08/a123bf9e-c4c6-425c-8499-6154cc7fb29e | lesso08 | "2025-03-05T12:20:34Z" | 26 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | "2025-03-04T17:39:11Z" | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a123bf9e-c4c6-425c-8499-6154cc7fb29e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# a123bf9e-c4c6-425c-8499-6154cc7fb29e
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000208
- train_batch_size: 4
- eval_batch_size: 4
- seed: 80
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | nan |
| 0.0 | 0.2828 | 500 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KuanP/continual-pretrain-a100_large_epoch-lr2e-5-cw10.0-lg0.5.new_2024-11-01_fold_2 | KuanP | "2024-11-02T00:29:26Z" | 34 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-11-02T00:29:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/zake7749_-_Llama-3.2-1B-it-chinese-kyara-awq | RichardErkhov | "2024-12-22T08:37:23Z" | 5 | 1 | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | null | "2024-12-22T08:35:45Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-1B-it-chinese-kyara - AWQ
- Model creator: https://huggingface.co/zake7749/
- Original model: https://huggingface.co/zake7749/Llama-3.2-1B-it-chinese-kyara/
Original model description:
---
library_name: transformers
license: cc-by-nc-4.0
language:
- en
- zh
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
---
# Kyara: Knowledge Yielding Adaptive Retrieval Augmentation for LLM Fine-tuning
[](https://zenodo.org/badge/latestdoi/844304447)
<p align="left">
🤗 <a href="https://huggingface.co/zake7749/Llama-3.2-1B-it-chinese-kyara/">Hugging Face</a> | 🚀<a href="https://github.com/zake7749/kyara">Github</a> | 📑 <a href="#">Paper</a> | 📖 <a href="https://github.com/zake7749/kyara/blob/main/document/README_EN.md">English</a> | 📖 <a href="https://github.com/zake7749/kyara">Chinese</a> | 💻 <a href="https://www.kaggle.com/code/zake7749/kyara-a-compact-yet-powerful-chinese-llm">Kaggle Notebook</a>
</p>
<div style="text-align: center;">
<img src="https://i.imgur.com/QiWlcYJ.jpeg" alt="kyara"/>
</div>
Kyara (Knowledge Yielding Adaptive Retrieval Augmentation) is an experimental project aimed at improving language models through knowledge retrieval processes. The project seeks to enhance the model’s ability to adapt knowledge and improve language comprehension, particularly in underrepresented languages like Traditional Chinese. Given the relatively scarce availability of Traditional Chinese data compared to the vast corpus of English data used for model training, Kyara addresses this gap by expanding the limited corpus for this language.
This is a preview model, with the stable version set to be released soon.
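For the original (unquantized) checkpoint, a minimal chat sketch via 🤗 Transformers (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zake7749/Llama-3.2-1B-it-chinese-kyara"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Please introduce Taipei's night-market culture in Traditional Chinese."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```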
## Benchmark
All evaluations are conducted in a zero-shot setting.
| Metric | Kyara-1b-it | Llama3.2-1b-it |
|--------------------------|----------|-------------|
| **[TMMLUPlus](https://huggingface.co/datasets/ikala/tmmluplus)** | **31.92** | 30.48 |
|  - STEM | **32.56** | 29.74 |
|  - Humanities | **30.60** | 29.89 |
|  - Other | **31.08** | 30.32 |
|  - Social-Science | **33.42** | 31.98 |
| **[MMLU-Redux](https://github.com/yuchenlin/ZeroEval)** | **41.40** | 19.62⁺ |
| **[GSM8K](https://github.com/yuchenlin/ZeroEval)** | 31.31 | **31.61** |
| **[MATH-L5](https://github.com/yuchenlin/ZeroEval)** | **5.55** | 2.91 |
| **[CRUX](https://github.com/yuchenlin/ZeroEval)** | **14** | 11 |
| **[AlpacaEval](https://github.com/tatsu-lab/alpaca_eval)** | **10.79** | 7.39 |
⁺: Llama3.2-1b-it appears to have failed to follow the [output schema](https://github.com/WildEval/ZeroEval/blob/e3dd922cba9eeb8b76ed8212a81ee4cf6f30de2f/src/templates/MCQA.py) of ZeroEval on MMLU, with 45.28% of examples lacking answers, which has resulted in a lower MMLU score.
|
parrottygg/LlamaXV1 | parrottygg | "2024-11-04T16:54:19Z" | 37 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-04T16:49:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ernestum/sac-seals-Humanoid-v1 | ernestum | "2023-09-18T07:55:21Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Humanoid-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-15T11:54:37Z" | ---
library_name: stable-baselines3
tags:
- seals/Humanoid-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Humanoid-v1
type: seals/Humanoid-v1
metrics:
- type: mean_reward
value: 367.48 +/- 59.61
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Humanoid-v1**
This is a trained model of a **SAC** agent playing **seals/Humanoid-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Humanoid-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Humanoid-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Humanoid-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Humanoid-v1 -f logs/
```
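Alternatively, a minimal sketch of loading the checkpoint directly in Python (this assumes the `huggingface_sb3` and `seals` packages, and that the checkpoint file name follows the usual zoo convention):

```python
import gymnasium as gym
import seals  # registers the seals/ environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC

checkpoint = load_from_hub("ernestum/sac-seals-Humanoid-v1", "sac-seals-Humanoid-v1.zip")
model = SAC.load(checkpoint)

env = gym.make("seals/Humanoid-v1")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```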
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Humanoid-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Humanoid-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('gamma', 0.98),
('learning_rate', 4.426351861707874e-05),
('learning_starts', 20000),
('n_timesteps', 2000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -0.1034412732183072,
'net_arch': [400, 300],
'use_sde': False}),
('tau', 0.08),
('train_freq', 8),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
sujayC66/t5-base-finetuned-stocknews_2000_150 | sujayC66 | "2024-03-07T18:21:12Z" | 18 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-07T07:52:03Z" | ---
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-stocknews_2000_150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-stocknews_2000_150
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5246
- Rouge1: 41.1174
- Rouge2: 36.4917
- Rougel: 40.2739
- Rougelsum: 40.5043
- Gen Len: 19.0
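Given the ROUGE metrics and the fixed generation length of 19 tokens, the checkpoint appears to be a headline-length summarizer; a minimal sketch under that assumption (the article text is illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sujayC66/t5-base-finetuned-stocknews_2000_150")
article = "Shares of ExampleCorp rose 4% after the company beat quarterly earnings estimates..."
print(summarizer(article, max_length=19)[0]["summary_text"])
```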
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 211 | 0.4220 | 37.4081 | 29.7287 | 35.6792 | 36.0611 | 19.0 |
| No log | 2.0 | 422 | 0.4020 | 37.6979 | 30.5377 | 36.0747 | 36.4168 | 19.0 |
| 0.3832 | 3.0 | 633 | 0.3947 | 38.258 | 31.0862 | 36.5414 | 37.0213 | 19.0 |
| 0.3832 | 4.0 | 844 | 0.3850 | 38.4834 | 31.3747 | 36.8077 | 37.2317 | 19.0 |
| 0.2939 | 5.0 | 1055 | 0.3765 | 38.8131 | 32.3372 | 37.3919 | 37.7305 | 19.0 |
| 0.2939 | 6.0 | 1266 | 0.3762 | 39.1749 | 33.0152 | 37.6824 | 38.0201 | 19.0 |
| 0.2939 | 7.0 | 1477 | 0.3569 | 39.2336 | 32.9984 | 37.8439 | 38.1723 | 19.0 |
| 0.2511 | 8.0 | 1688 | 0.3551 | 39.452 | 33.6999 | 38.3731 | 38.5895 | 19.0 |
| 0.2511 | 9.0 | 1899 | 0.3523 | 39.8924 | 34.2746 | 38.6913 | 38.9944 | 19.0 |
| 0.2532 | 10.0 | 2110 | 0.3487 | 39.9155 | 34.2762 | 38.8052 | 39.077 | 19.0 |
| 0.2532 | 11.0 | 2321 | 0.3533 | 39.7805 | 34.2195 | 38.6591 | 38.9007 | 19.0 |
| 0.2158 | 12.0 | 2532 | 0.3529 | 39.6286 | 34.2772 | 38.5553 | 38.8225 | 19.0 |
| 0.2158 | 13.0 | 2743 | 0.3506 | 40.1899 | 35.0527 | 39.2227 | 39.4969 | 19.0 |
| 0.2158 | 14.0 | 2954 | 0.3474 | 40.666 | 35.5759 | 39.6311 | 39.9267 | 19.0 |
| 0.1882 | 15.0 | 3165 | 0.3488 | 40.4267 | 35.2551 | 39.2486 | 39.5608 | 19.0 |
| 0.1882 | 16.0 | 3376 | 0.3547 | 40.6478 | 35.5519 | 39.6034 | 39.8449 | 19.0 |
| 0.1612 | 17.0 | 3587 | 0.3616 | 40.7061 | 35.8348 | 39.8034 | 40.0508 | 19.0 |
| 0.1612 | 18.0 | 3798 | 0.3621 | 40.7052 | 35.8514 | 39.7689 | 40.0123 | 19.0 |
| 0.1434 | 19.0 | 4009 | 0.3632 | 40.5196 | 35.649 | 39.5977 | 39.8099 | 19.0 |
| 0.1434 | 20.0 | 4220 | 0.3667 | 40.8356 | 35.9832 | 39.9295 | 40.1647 | 19.0 |
| 0.1434 | 21.0 | 4431 | 0.3711 | 40.75 | 35.7893 | 39.7533 | 40.0671 | 19.0 |
| 0.1248 | 22.0 | 4642 | 0.3714 | 40.6404 | 35.8139 | 39.6508 | 39.9206 | 19.0 |
| 0.1248 | 23.0 | 4853 | 0.3720 | 40.596 | 35.7999 | 39.7515 | 39.9484 | 19.0 |
| 0.1097 | 24.0 | 5064 | 0.3766 | 40.6635 | 35.8029 | 39.8031 | 40.023 | 19.0 |
| 0.1097 | 25.0 | 5275 | 0.3841 | 40.6312 | 35.7811 | 39.7593 | 40.0159 | 19.0 |
| 0.1097 | 26.0 | 5486 | 0.3874 | 40.6912 | 35.85 | 39.7479 | 40.0379 | 19.0 |
| 0.0994 | 27.0 | 5697 | 0.3840 | 40.7263 | 35.9777 | 39.8711 | 40.1549 | 19.0 |
| 0.0994 | 28.0 | 5908 | 0.3935 | 40.7512 | 35.8443 | 39.7654 | 40.052 | 19.0 |
| 0.0877 | 29.0 | 6119 | 0.3942 | 40.801 | 35.9741 | 39.8594 | 40.0986 | 19.0 |
| 0.0877 | 30.0 | 6330 | 0.3977 | 40.9239 | 36.1363 | 40.0563 | 40.319 | 19.0 |
| 0.0786 | 31.0 | 6541 | 0.4009 | 40.8977 | 36.1534 | 40.0016 | 40.2385 | 19.0 |
| 0.0786 | 32.0 | 6752 | 0.3996 | 40.7816 | 36.1552 | 39.9214 | 40.1717 | 19.0 |
| 0.0786 | 33.0 | 6963 | 0.4023 | 40.9965 | 36.3464 | 40.1217 | 40.3481 | 19.0 |
| 0.0723 | 34.0 | 7174 | 0.4086 | 40.8352 | 36.1049 | 39.8852 | 40.142 | 19.0 |
| 0.0723 | 35.0 | 7385 | 0.4048 | 40.9399 | 36.2465 | 40.0545 | 40.3178 | 19.0 |
| 0.0654 | 36.0 | 7596 | 0.4097 | 40.9975 | 36.2784 | 40.0802 | 40.3726 | 19.0 |
| 0.0654 | 37.0 | 7807 | 0.4117 | 40.851 | 36.1677 | 40.0313 | 40.3027 | 19.0 |
| 0.0592 | 38.0 | 8018 | 0.4164 | 40.9427 | 36.2783 | 40.1323 | 40.4087 | 19.0 |
| 0.0592 | 39.0 | 8229 | 0.4187 | 40.6632 | 36.0088 | 39.8049 | 40.0361 | 19.0 |
| 0.0592 | 40.0 | 8440 | 0.4188 | 41.008 | 36.3243 | 40.1924 | 40.466 | 19.0 |
| 0.0557 | 41.0 | 8651 | 0.4244 | 40.887 | 36.2373 | 40.0544 | 40.3017 | 19.0 |
| 0.0557 | 42.0 | 8862 | 0.4219 | 40.8024 | 36.1323 | 39.9768 | 40.2685 | 19.0 |
| 0.0516 | 43.0 | 9073 | 0.4234 | 40.7758 | 36.1291 | 39.9284 | 40.1658 | 19.0 |
| 0.0516 | 44.0 | 9284 | 0.4268 | 40.8067 | 36.1192 | 39.9735 | 40.212 | 19.0 |
| 0.0516 | 45.0 | 9495 | 0.4229 | 40.8445 | 36.0577 | 39.9435 | 40.1851 | 19.0 |
| 0.0473 | 46.0 | 9706 | 0.4343 | 40.7118 | 36.1068 | 39.9453 | 40.1875 | 19.0 |
| 0.0473 | 47.0 | 9917 | 0.4311 | 40.7688 | 36.0953 | 39.9612 | 40.1921 | 19.0 |
| 0.0438 | 48.0 | 10128 | 0.4376 | 40.9327 | 36.2236 | 40.0164 | 40.2675 | 19.0 |
| 0.0438 | 49.0 | 10339 | 0.4360 | 41.0039 | 36.3548 | 40.0958 | 40.3716 | 19.0 |
| 0.0408 | 50.0 | 10550 | 0.4418 | 40.9386 | 36.3116 | 40.0052 | 40.2586 | 19.0 |
| 0.0408 | 51.0 | 10761 | 0.4436 | 41.0744 | 36.421 | 40.1518 | 40.4014 | 19.0 |
| 0.0408 | 52.0 | 10972 | 0.4427 | 41.1198 | 36.4495 | 40.2116 | 40.4505 | 19.0 |
| 0.0382 | 53.0 | 11183 | 0.4428 | 41.0544 | 36.4075 | 40.1852 | 40.4269 | 19.0 |
| 0.0382 | 54.0 | 11394 | 0.4468 | 41.0366 | 36.3513 | 40.1403 | 40.361 | 19.0 |
| 0.0354 | 55.0 | 11605 | 0.4463 | 40.9558 | 36.3748 | 40.1348 | 40.3447 | 19.0 |
| 0.0354 | 56.0 | 11816 | 0.4508 | 40.8857 | 36.3143 | 40.0455 | 40.2318 | 19.0 |
| 0.0338 | 57.0 | 12027 | 0.4544 | 40.8272 | 36.244 | 40.0023 | 40.2384 | 19.0 |
| 0.0338 | 58.0 | 12238 | 0.4555 | 40.9537 | 36.1908 | 40.0228 | 40.2483 | 19.0 |
| 0.0338 | 59.0 | 12449 | 0.4521 | 40.9258 | 36.1708 | 40.0611 | 40.3071 | 19.0 |
| 0.031 | 60.0 | 12660 | 0.4555 | 40.8837 | 36.147 | 40.0305 | 40.2382 | 19.0 |
| 0.031 | 61.0 | 12871 | 0.4566 | 40.9297 | 36.2576 | 40.09 | 40.2747 | 19.0 |
| 0.0307 | 62.0 | 13082 | 0.4562 | 40.8585 | 36.2582 | 40.0722 | 40.25 | 19.0 |
| 0.0307 | 63.0 | 13293 | 0.4592 | 40.9201 | 36.2751 | 40.0861 | 40.3269 | 19.0 |
| 0.0281 | 64.0 | 13504 | 0.4567 | 40.9232 | 36.2481 | 40.0753 | 40.3216 | 19.0 |
| 0.0281 | 65.0 | 13715 | 0.4606 | 41.0077 | 36.3489 | 40.1395 | 40.3744 | 19.0 |
| 0.0281 | 66.0 | 13926 | 0.4649 | 41.0042 | 36.5452 | 40.2019 | 40.4466 | 19.0 |
| 0.0263 | 67.0 | 14137 | 0.4674 | 40.9152 | 36.4575 | 40.2074 | 40.4128 | 19.0 |
| 0.0263 | 68.0 | 14348 | 0.4638 | 40.9942 | 36.4242 | 40.2192 | 40.4164 | 19.0 |
| 0.0258 | 69.0 | 14559 | 0.4652 | 41.0026 | 36.3871 | 40.1336 | 40.3569 | 19.0 |
| 0.0258 | 70.0 | 14770 | 0.4683 | 40.9275 | 36.4236 | 40.0798 | 40.3247 | 19.0 |
| 0.0258 | 71.0 | 14981 | 0.4729 | 40.9299 | 36.2989 | 40.1179 | 40.3533 | 19.0 |
| 0.0245 | 72.0 | 15192 | 0.4713 | 40.8745 | 36.2617 | 40.0829 | 40.3073 | 19.0 |
| 0.0245 | 73.0 | 15403 | 0.4720 | 40.9534 | 36.4602 | 40.1804 | 40.4279 | 19.0 |
| 0.0231 | 74.0 | 15614 | 0.4762 | 41.055 | 36.552 | 40.2672 | 40.5027 | 19.0 |
| 0.0231 | 75.0 | 15825 | 0.4776 | 40.939 | 36.492 | 40.1735 | 40.3718 | 19.0 |
| 0.0219 | 76.0 | 16036 | 0.4814 | 41.0543 | 36.6498 | 40.3146 | 40.5381 | 19.0 |
| 0.0219 | 77.0 | 16247 | 0.4826 | 41.0015 | 36.5925 | 40.2389 | 40.4813 | 19.0 |
| 0.0219 | 78.0 | 16458 | 0.4840 | 41.0486 | 36.6352 | 40.3106 | 40.5603 | 19.0 |
| 0.0213 | 79.0 | 16669 | 0.4848 | 40.9784 | 36.4886 | 40.1903 | 40.439 | 19.0 |
| 0.0213 | 80.0 | 16880 | 0.4910 | 41.175 | 36.6854 | 40.3474 | 40.5917 | 19.0 |
| 0.0204 | 81.0 | 17091 | 0.4843 | 41.0851 | 36.5354 | 40.3005 | 40.5392 | 19.0 |
| 0.0204 | 82.0 | 17302 | 0.4847 | 41.2714 | 36.6856 | 40.4516 | 40.672 | 19.0 |
| 0.0196 | 83.0 | 17513 | 0.4860 | 40.9692 | 36.3916 | 40.1273 | 40.3602 | 19.0 |
| 0.0196 | 84.0 | 17724 | 0.4870 | 40.9497 | 36.3933 | 40.1057 | 40.3926 | 19.0 |
| 0.0196 | 85.0 | 17935 | 0.4827 | 41.0823 | 36.5005 | 40.2376 | 40.4651 | 19.0 |
| 0.019 | 86.0 | 18146 | 0.4889 | 41.1902 | 36.6614 | 40.3848 | 40.6069 | 19.0 |
| 0.019 | 87.0 | 18357 | 0.4890 | 41.186 | 36.6136 | 40.4576 | 40.6462 | 19.0 |
| 0.0179 | 88.0 | 18568 | 0.4940 | 41.1593 | 36.5153 | 40.377 | 40.5727 | 19.0 |
| 0.0179 | 89.0 | 18779 | 0.4908 | 40.9712 | 36.43 | 40.1811 | 40.3797 | 19.0 |
| 0.0179 | 90.0 | 18990 | 0.4914 | 41.0358 | 36.4656 | 40.1936 | 40.4449 | 19.0 |
| 0.0176 | 91.0 | 19201 | 0.4924 | 40.8918 | 36.3329 | 40.0398 | 40.2895 | 19.0 |
| 0.0176 | 92.0 | 19412 | 0.4913 | 41.0889 | 36.3829 | 40.213 | 40.4163 | 19.0 |
| 0.0168 | 93.0 | 19623 | 0.4939 | 41.048 | 36.407 | 40.1863 | 40.4131 | 19.0 |
| 0.0168 | 94.0 | 19834 | 0.4996 | 41.0211 | 36.3687 | 40.1492 | 40.3375 | 19.0 |
| 0.016 | 95.0 | 20045 | 0.5000 | 40.8562 | 36.2496 | 39.9959 | 40.2259 | 19.0 |
| 0.016 | 96.0 | 20256 | 0.4989 | 41.0123 | 36.3468 | 40.1217 | 40.3407 | 19.0 |
| 0.016 | 97.0 | 20467 | 0.5004 | 41.0992 | 36.4577 | 40.1794 | 40.4175 | 19.0 |
| 0.0163 | 98.0 | 20678 | 0.5009 | 41.0319 | 36.3625 | 40.1331 | 40.3442 | 19.0 |
| 0.0163 | 99.0 | 20889 | 0.4978 | 40.8888 | 36.238 | 40.0311 | 40.2348 | 19.0 |
| 0.0154 | 100.0 | 21100 | 0.5059 | 40.9034 | 36.2802 | 40.033 | 40.2534 | 19.0 |
| 0.0154 | 101.0 | 21311 | 0.5026 | 41.0808 | 36.4192 | 40.211 | 40.4242 | 19.0 |
| 0.0148 | 102.0 | 21522 | 0.5043 | 41.1898 | 36.4732 | 40.3336 | 40.5495 | 19.0 |
| 0.0148 | 103.0 | 21733 | 0.5062 | 41.216 | 36.6109 | 40.408 | 40.6201 | 19.0 |
| 0.0148 | 104.0 | 21944 | 0.5076 | 40.9136 | 36.2326 | 40.043 | 40.274 | 19.0 |
| 0.0142 | 105.0 | 22155 | 0.5085 | 41.1476 | 36.5099 | 40.3444 | 40.5131 | 19.0 |
| 0.0142 | 106.0 | 22366 | 0.5087 | 41.1 | 36.4271 | 40.2888 | 40.4809 | 19.0 |
| 0.0137 | 107.0 | 22577 | 0.5083 | 40.8868 | 36.2128 | 40.0356 | 40.2519 | 19.0 |
| 0.0137 | 108.0 | 22788 | 0.5097 | 41.0436 | 36.4065 | 40.2004 | 40.4431 | 19.0 |
| 0.0137 | 109.0 | 22999 | 0.5113 | 41.1789 | 36.617 | 40.3938 | 40.5925 | 19.0 |
| 0.0137 | 110.0 | 23210 | 0.5127 | 40.989 | 36.3659 | 40.1097 | 40.3074 | 19.0 |
| 0.0137 | 111.0 | 23421 | 0.5144 | 41.0157 | 36.3607 | 40.1239 | 40.3237 | 19.0 |
| 0.0132 | 112.0 | 23632 | 0.5153 | 40.9412 | 36.3165 | 40.0601 | 40.283 | 19.0 |
| 0.0132 | 113.0 | 23843 | 0.5127 | 41.011 | 36.3343 | 40.1059 | 40.3317 | 19.0 |
| 0.0138 | 114.0 | 24054 | 0.5174 | 40.9507 | 36.3226 | 40.0426 | 40.2821 | 19.0 |
| 0.0138 | 115.0 | 24265 | 0.5172 | 40.9169 | 36.2471 | 40.0189 | 40.2581 | 19.0 |
| 0.0138 | 116.0 | 24476 | 0.5191 | 40.9621 | 36.2937 | 40.0859 | 40.2872 | 19.0 |
| 0.0129 | 117.0 | 24687 | 0.5164 | 40.9124 | 36.2428 | 40.0247 | 40.2636 | 19.0 |
| 0.0129 | 118.0 | 24898 | 0.5217 | 40.8482 | 36.2412 | 39.983 | 40.2084 | 19.0 |
| 0.0131 | 119.0 | 25109 | 0.5191 | 40.9377 | 36.3549 | 40.0702 | 40.303 | 19.0 |
| 0.0131 | 120.0 | 25320 | 0.5206 | 41.0878 | 36.5262 | 40.2577 | 40.4903 | 19.0 |
| 0.0123 | 121.0 | 25531 | 0.5223 | 40.9777 | 36.4348 | 40.1438 | 40.3255 | 19.0 |
| 0.0123 | 122.0 | 25742 | 0.5200 | 40.9512 | 36.2822 | 40.0795 | 40.2998 | 19.0 |
| 0.0123 | 123.0 | 25953 | 0.5244 | 40.9508 | 36.3301 | 40.0726 | 40.3256 | 19.0 |
| 0.0125 | 124.0 | 26164 | 0.5225 | 41.1733 | 36.4561 | 40.3336 | 40.5512 | 19.0 |
| 0.0125 | 125.0 | 26375 | 0.5240 | 41.0364 | 36.4154 | 40.189 | 40.4268 | 19.0 |
| 0.0118 | 126.0 | 26586 | 0.5246 | 41.1267 | 36.4904 | 40.3025 | 40.5672 | 19.0 |
| 0.0118 | 127.0 | 26797 | 0.5214 | 40.9609 | 36.417 | 40.1255 | 40.3472 | 19.0 |
| 0.0125 | 128.0 | 27008 | 0.5196 | 41.1335 | 36.4937 | 40.3248 | 40.5371 | 19.0 |
| 0.0125 | 129.0 | 27219 | 0.5214 | 41.1757 | 36.606 | 40.3908 | 40.6112 | 19.0 |
| 0.0125 | 130.0 | 27430 | 0.5190 | 41.1436 | 36.5116 | 40.344 | 40.5505 | 19.0 |
| 0.012 | 131.0 | 27641 | 0.5227 | 41.0854 | 36.5638 | 40.2975 | 40.5342 | 19.0 |
| 0.012 | 132.0 | 27852 | 0.5233 | 41.0652 | 36.5087 | 40.2447 | 40.4784 | 19.0 |
| 0.0117 | 133.0 | 28063 | 0.5251 | 41.1272 | 36.4621 | 40.2664 | 40.4917 | 19.0 |
| 0.0117 | 134.0 | 28274 | 0.5215 | 41.1819 | 36.5561 | 40.3583 | 40.5515 | 19.0 |
| 0.0117 | 135.0 | 28485 | 0.5219 | 41.1615 | 36.5308 | 40.323 | 40.5283 | 19.0 |
| 0.0116 | 136.0 | 28696 | 0.5228 | 41.0947 | 36.4701 | 40.2537 | 40.4725 | 19.0 |
| 0.0116 | 137.0 | 28907 | 0.5211 | 41.1187 | 36.4948 | 40.2711 | 40.4957 | 19.0 |
| 0.0114 | 138.0 | 29118 | 0.5219 | 41.0826 | 36.4684 | 40.2557 | 40.4678 | 19.0 |
| 0.0114 | 139.0 | 29329 | 0.5223 | 41.1453 | 36.5356 | 40.3132 | 40.5333 | 19.0 |
| 0.0111 | 140.0 | 29540 | 0.5237 | 41.1055 | 36.4938 | 40.2656 | 40.4907 | 19.0 |
| 0.0111 | 141.0 | 29751 | 0.5241 | 41.1391 | 36.4983 | 40.2896 | 40.5215 | 19.0 |
| 0.0111 | 142.0 | 29962 | 0.5243 | 41.1702 | 36.5621 | 40.3401 | 40.5579 | 19.0 |
| 0.0112 | 143.0 | 30173 | 0.5242 | 41.1499 | 36.5609 | 40.3355 | 40.5387 | 19.0 |
| 0.0112 | 144.0 | 30384 | 0.5236 | 41.1261 | 36.5274 | 40.3011 | 40.522 | 19.0 |
| 0.011 | 145.0 | 30595 | 0.5240 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.011 | 146.0 | 30806 | 0.5248 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.0106 | 147.0 | 31017 | 0.5241 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.0106 | 148.0 | 31228 | 0.5243 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.0106 | 149.0 | 31439 | 0.5245 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.0105 | 150.0 | 31650 | 0.5246 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
pfunk/CartPole-v1-CP_DQPN_x2-seed888 | pfunk | "2023-03-20T21:54:48Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-20T21:54:45Z" | ---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 102.39 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x2.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x2]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x2 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed888/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed888/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x2-seed888/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x2 --policy-network-frequency 200 --seed 888
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x2',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 200,
'policy_tau': 1.0,
'save_model': True,
'seed': 888,
'start_e': 1.0,
'target_network_frequency': 100,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Deaquay/T-Rex-mini-Q4_K_M-GGUF | Deaquay | "2025-04-08T16:25:42Z" | 0 | 0 | null | [
"gguf",
"roleplay",
"storytelling",
"language-model",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:saturated-labs/T-Rex-mini",
"base_model:quantized:saturated-labs/T-Rex-mini",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-08T16:25:19Z" | ---
base_model: saturated-labs/T-Rex-mini
language:
- en
license: llama3
tags:
- roleplay
- storytelling
- language-model
- llama-cpp
- gguf-my-repo
---
# Deaquay/T-Rex-mini-Q4_K_M-GGUF
This model was converted to GGUF format from [`saturated-labs/T-Rex-mini`](https://huggingface.co/saturated-labs/T-Rex-mini) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/saturated-labs/T-Rex-mini) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Deaquay/T-Rex-mini-Q4_K_M-GGUF --hf-file t-rex-mini-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Deaquay/T-Rex-mini-Q4_K_M-GGUF --hf-file t-rex-mini-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Deaquay/T-Rex-mini-Q4_K_M-GGUF --hf-file t-rex-mini-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Deaquay/T-Rex-mini-Q4_K_M-GGUF --hf-file t-rex-mini-q4_k_m.gguf -c 2048
```
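Beyond the llama.cpp binaries, a hedged Python sketch with the `llama-cpp-python` bindings (assuming that package is installed; the prompt and settings are illustrative):
```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# Llama.from_pretrained pulls the GGUF file straight from this repo via huggingface_hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Deaquay/T-Rex-mini-Q4_K_M-GGUF",
    filename="t-rex-mini-q4_k_m.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```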
|
Judge12138/ddpm-churchs-128 | Judge12138 | "2025-04-07T13:26:05Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2025-04-07T12:05:36Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
jiinking/2_random_MQA_llama_model | jiinking | "2025-03-10T14:27:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-10T13:58:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
trenden/6f2d2629-8117-4a35-a726-4ed324e848bc | trenden | "2025-01-25T14:49:55Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"base_model:adapter:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"region:us"
] | null | "2025-01-25T14:47:01Z" | ---
library_name: peft
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6f2d2629-8117-4a35-a726-4ed324e848bc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0f07be21f013bdc5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0f07be21f013bdc5_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/6f2d2629-8117-4a35-a726-4ed324e848bc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/0f07be21f013bdc5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7199ac6c-77d7-4b63-90db-d137268a55ae
wandb_project: Birthday-SN56-3-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7199ac6c-77d7-4b63-90db-d137268a55ae
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6f2d2629-8117-4a35-a726-4ed324e848bc
This model is a fine-tuned version of [HuggingFaceH4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceH4/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3721 | 0.0000 | 1 | 10.3772 |
| 10.3819 | 0.0001 | 3 | 10.3772 |
| 10.3765 | 0.0002 | 6 | 10.3771 |
| 10.3822 | 0.0003 | 9 | 10.3768 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nielsr/swin-tiny-patch4-window7-224-finetuned-eurosat-kornia | nielsr | "2023-09-12T18:35:03Z" | 221 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-08-29T08:52:19Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
base_model: microsoft/swin-tiny-patch4-window7-224
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat-kornia
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.9829629629629629
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat-kornia
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0540
- Accuracy: 0.9830
## Model description
More information needed
## Intended uses & limitations
More information needed
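The original card leaves usage unspecified; as a hedged illustration, inference follows the standard image-classification pipeline (the image path is a placeholder):
```python
# Hedged usage sketch: standard image-classification pipeline.
# The image path is a placeholder; labels come from the EuroSAT-style class folders.
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="nielsr/swin-tiny-patch4-window7-224-finetuned-eurosat-kornia",
)
print(clf("example_satellite_tile.png"))  # hypothetical local file
```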
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0859 | 1.0 | 190 | 0.0969 | 0.9685 |
| 0.0664 | 2.0 | 380 | 0.0627 | 0.9815 |
| 0.0359 | 3.0 | 570 | 0.0540 | 0.9830 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GPTCache/paraphrase-albert-onnx | GPTCache | "2023-04-05T08:37:38Z" | 0 | 0 | null | [
"onnx",
"feature-extraction",
"en",
"license:mit",
"region:us"
] | feature-extraction | "2023-04-05T06:23:50Z" | ---
license: mit
language:
- en
pipeline_tag: feature-extraction
--- |
nandinib1999/quote-generator | nandinib1999 | "2022-03-06T12:04:44Z" | 68 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"text generation",
"en",
"dataset:quotes-500K",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language:
- en
thumbnail:
tags:
- text generation
license: cc
datasets:
- quotes-500K
metrics:
- perplexity
---
# Quotes Generator
## Model description
This is a GPT2 model fine-tuned on the Quotes-500K dataset.
## Intended uses & limitations
Given a user prompt, it generates motivational quotes that begin with that prompt.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nandinib1999/quote-generator")
model = AutoModelForCausalLM.from_pretrained("nandinib1999/quote-generator")
```
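A short generation sketch follows, continuing from the objects above (the prompt and sampling settings are illustrative, not from the original card):
```python
# Illustrative generation sketch for the fine-tuned GPT-2 quote model.
prompt = "Life is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```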
## Training data
This is the split of the full dataset into training, validation, and test sets for the fine-tuning task.
<table style="width:30%">
<tr>
<th>train</th>
<td>349796</td>
</tr>
<tr>
<th>validation</th>
<td>99942</td>
</tr>
<tr>
<th>test</th>
<td>49971</td>
</tr>
</table>
## Training procedure
The model was fine-tuned on a Google Colab GPU for one epoch. The weights of the pre-trained GPT-2 model were used as the base.
## Eval results
<table style="width:30%">
<tr>
<th>Epoch</th>
<th>Perplexity</th>
</tr>
<tr>
<td>1</td>
<td>15.180</td>
</tr>
</table> |
15e/slu | 15e | "2025-04-07T17:18:02Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-07T17:07:22Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SLU
---
# Slu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SLU` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SLU",
"lora_weights": "https://huggingface.co/15e/slu/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('15e/slu', weight_name='lora.safetensors')
image = pipeline('SLU').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/15e/slu/discussions) to add images that show off what you’ve made with this LoRA.
|
RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-8bits | RichardErkhov | "2024-05-01T14:01:25Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-01T13:51:01Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Spicy-Laymonade-7B - bnb 8bits
- Model creator: https://huggingface.co/ABX-AI/
- Original model: https://huggingface.co/ABX-AI/Spicy-Laymonade-7B/
Original model description:
---
base_model:
- cgato/TheSpice-7b-v0.1.1
- ABX-AI/Laymonade-7B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: other
---
GGUF: https://huggingface.co/ABX-AI/Spicy-Laymonade-7B-GGUF-IQ-Imatrix

# Spicy-Laymonade-7B
Well, we have Laymonade, so why not spice it up? This merge is a step toward creating a new 9B.
However, I did try it out, and it seemed to work pretty well.
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [cgato/TheSpice-7b-v0.1.1](https://huggingface.co/cgato/TheSpice-7b-v0.1.1)
* [ABX-AI/Laymonade-7B](https://huggingface.co/ABX-AI/Laymonade-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: cgato/TheSpice-7b-v0.1.1
layer_range: [0, 32]
- model: ABX-AI/Laymonade-7B
layer_range: [0, 32]
merge_method: slerp
base_model: ABX-AI/Laymonade-7B
parameters:
t:
- filter: self_attn
value: [0.7, 0.3, 0.6, 0.2, 0.5]
- filter: mlp
value: [0.3, 0.7, 0.4, 0.8, 0.5]
- value: 0.5
dtype: bfloat16
```
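For intuition, SLERP interpolates each pair of weight tensors along the hypersphere rather than linearly. A simplified NumPy sketch of the per-tensor operation (the real mergekit implementation handles more edge cases and applies the per-filter `t` values above):
```python
# Simplified SLERP between two flattened weight tensors (illustrative only).
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between weight directions
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return np.sin((1.0 - t) * omega) / so * a + np.sin(t * omega) / so * b
```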
|
optimum/deeplabv3-mobilevit-small-neuronx | optimum | "2024-05-31T11:49:44Z" | 8 | 0 | transformers | [
"transformers",
"mobilevit",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-05-31T11:46:04Z" | ---
license: other
license_name: apple-sample-code-license
license_link: https://github.com/apple/ml-cvnets/blob/main/LICENSE
---
Exported with:
```bash
optimum-cli export neuron --model apple/deeplabv3-mobilevit-small --batch_size 1 --task semantic-segmentation mobilevit_neuron/
``` |
DanGalt/ppo-PyramidsTraining | DanGalt | "2023-01-11T15:30:58Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-01-11T15:30:51Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: DanGalt/ppo-PyramidsTraining
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf | RichardErkhov | "2025-03-19T06:03:42Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-19T06:00:12Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-medical-chatbot - GGUF
- Model creator: https://huggingface.co/pushpendrasingh21/
- Original model: https://huggingface.co/pushpendrasingh21/gpt2-medical-chatbot/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-medical-chatbot.Q2_K.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q2_K.gguf) | Q2_K | 0.08GB |
| [gpt2-medical-chatbot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [gpt2-medical-chatbot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [gpt2-medical-chatbot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [gpt2-medical-chatbot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [gpt2-medical-chatbot.Q3_K.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q3_K.gguf) | Q3_K | 0.09GB |
| [gpt2-medical-chatbot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [gpt2-medical-chatbot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [gpt2-medical-chatbot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [gpt2-medical-chatbot.Q4_0.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q4_0.gguf) | Q4_0 | 0.1GB |
| [gpt2-medical-chatbot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [gpt2-medical-chatbot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [gpt2-medical-chatbot.Q4_K.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q4_K.gguf) | Q4_K | 0.11GB |
| [gpt2-medical-chatbot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [gpt2-medical-chatbot.Q4_1.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q4_1.gguf) | Q4_1 | 0.11GB |
| [gpt2-medical-chatbot.Q5_0.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q5_0.gguf) | Q5_0 | 0.11GB |
| [gpt2-medical-chatbot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [gpt2-medical-chatbot.Q5_K.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q5_K.gguf) | Q5_K | 0.12GB |
| [gpt2-medical-chatbot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [gpt2-medical-chatbot.Q5_1.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q5_1.gguf) | Q5_1 | 0.12GB |
| [gpt2-medical-chatbot.Q6_K.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q6_K.gguf) | Q6_K | 0.13GB |
| [gpt2-medical-chatbot.Q8_0.gguf](https://huggingface.co/RichardErkhov/pushpendrasingh21_-_gpt2-medical-chatbot-gguf/blob/main/gpt2-medical-chatbot.Q8_0.gguf) | Q8_0 | 0.17GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WPRM/qwen-14b-text-policy-checkpoint-228 | WPRM | "2025-04-10T05:22:55Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"region:us"
] | null | "2025-04-10T05:22:48Z" | ---
base_model: Qwen/Qwen2.5-14B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
nttx/a2008d21-8512-4495-8576-50b8771fe028 | nttx | "2025-01-30T05:40:35Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:TitanML/tiny-mixtral",
"base_model:adapter:TitanML/tiny-mixtral",
"region:us"
] | null | "2025-01-30T05:38:51Z" | ---
library_name: peft
base_model: TitanML/tiny-mixtral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a2008d21-8512-4495-8576-50b8771fe028
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TitanML/tiny-mixtral
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 43f7ddcec2cf67a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/43f7ddcec2cf67a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/a2008d21-8512-4495-8576-50b8771fe028
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/43f7ddcec2cf67a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 087b421c-60ad-41db-a85b-6595fb9fcaeb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 087b421c-60ad-41db-a85b-6595fb9fcaeb
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a2008d21-8512-4495-8576-50b8771fe028
This model is a fine-tuned version of [TitanML/tiny-mixtral](https://huggingface.co/TitanML/tiny-mixtral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.9907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.8322 | 0.0287 | 200 | 8.9907 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
martimfasantos/tinyllama-1.1b-sum-sft-qlora | martimfasantos | "2024-05-06T15:49:26Z" | 208 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:martimfasantos/openai-tldr-filtered",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-05-01T01:01:07Z" | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
datasets:
- martimfasantos/openai-tldr-filtered
model-index:
- name: tinyllama-1.1b-sum-sft-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-1.1b-sum-sft-qlora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the martimfasantos/openai-tldr-filtered dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1288 | 1.0 | 2952 | 2.1338 |
| 2.125 | 2.0 | 5904 | 2.1290 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
John6666/luminarqmix-vpred-v6-noobaixl-illustriousxl-merge-model-v60-sdxl | John6666 | "2025-04-15T12:51:13Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"hands",
"human body",
"flatter shading",
"merge",
"v-pred",
"Illustrious XL v1.1",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-0.9r",
"base_model:merge:Laxhar/noobai-XL-Vpred-0.9r",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:merge:Laxhar/noobai-XL-Vpred-1.0",
"base_model:OnomaAIResearch/Illustrious-XL-v1.1",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v1.1",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:merge:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:Raelina/Raehoshi-illust-XL-4",
"base_model:merge:Raelina/Raehoshi-illust-XL-4",
"base_model:advokat/IterComp_safetensors",
"base_model:merge:advokat/IterComp_safetensors",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-04-15T12:45:13Z" | |
Yntec/RainbowPunk | Yntec | "2025-03-13T22:07:15Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"Style",
"General",
"Anime",
"Art",
"moesah",
"XpucT",
"stable-diffusion",
"stable-diffusion-1.5",
"stable-diffusion-diffusers",
"text-to-image",
"base_model:Yntec/Deliberate2",
"base_model:finetune:Yntec/Deliberate2",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2025-03-13T09:14:12Z" | ---
license: cc-by-nc-nd-4.0
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Style
- General
- Anime
- Art
- moesah
- XpucT
- stable-diffusion
- stable-diffusion-1.5
- stable-diffusion-diffusers
- diffusers
- text-to-image
base_model:
- Yntec/Deliberate2
inference: true
---
# RainbowPunk
This model is a wish come true! I finally managed to merge a LyCORIS into a checkpoint, opening the door to new, never-before-seen concepts! Use "Rainbowpunk" in your prompts to activate the effect. This is the RainbowPunk LoRA merged into Deliberate 2! Comparison:

Showcase and prompts (all use seed 9119):

Rainbowpunk Ikea catalogue photo of steampunk farmhouse, a Pretty CUTE little girl, sitting, DETAILED EYES, kitchen, gorgeous hair, Magazine ad, iconic, 1949, sharp focus. acrylic art on canvas by paul lehr and ROSSDRAWS and Clay Mann

Rainbowpunk Pretty CUTE Girl, sitting on Overwatch, DETAILED CHIBI EYES, soaking in the rain, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus, aerial photography, trending on artstation, peter lloyd. Illustration By ROSSDRAWS and Dave Rapoza and artgerm and leyendecker and Clay

Rainbowpunk, masterpiece,best quality, retro artstyle, a cute little witch's prophecy comes true, detailed eyes, logo, cover, 1980s /style/

rainbowpunk, chalk dust in a living room
Original pages:
https://civitai.com/models/79651/rainbow-punk
https://huggingface.co/XpucT/Deliberate |
mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF | mradermacher | "2024-12-17T01:17:38Z" | 1,550 | 3 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"12b",
"chat",
"roleplay",
"creative-writing",
"NuSLERP",
"en",
"base_model:redrix/patricide-12B-Unslop-Mell-v2",
"base_model:quantized:redrix/patricide-12B-Unslop-Mell-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-16T21:07:01Z" | ---
base_model: redrix/patricide-12B-Unslop-Mell-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- 12b
- chat
- roleplay
- creative-writing
- NuSLERP
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/redrix/patricide-12B-Unslop-Mell-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
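As a concrete starting point, here is a minimal sketch that downloads one quant from the table below and runs it with llama-cpp-python; the filename is the i1-Q4_K_M entry marked "fast, recommended", while the context size and sampling settings are assumptions.
```python
# Minimal sketch; assumes `huggingface_hub` and `llama-cpp-python` are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF",
    filename="patricide-12B-Unslop-Mell-v2.i1-Q4_K_M.gguf",  # "fast, recommended" in the table
)
llm = Llama(model_path=path, n_ctx=4096)  # n_ctx is illustrative, not a recommendation
print(llm("Write one line of dialogue.", max_tokens=64)["choices"][0]["text"])
```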
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/patricide-12B-Unslop-Mell-v2-i1-GGUF/resolve/main/patricide-12B-Unslop-Mell-v2.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
damgomz/ft_2_14e6_base_x4 | damgomz | "2024-06-21T14:46:29Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-19T16:15:37Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_2_14e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 2 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.697627 | 0.382422 |
| 1 | 0.326044 | 0.261927 | 0.905944 |
| 2 | 0.228978 | 0.248462 | 0.923464 |
| 3 | 0.179889 | 0.240959 | 0.920381 |
| 4 | 0.133852 | 0.259695 | 0.936355 |
| 5 | 0.101323 | 0.298263 | 0.916265 |
| 6 | 0.075174 | 0.339868 | 0.921268 |
|
AlekseyScorpi/llama_2_13b_vacancies_GGUF | AlekseyScorpi | "2024-05-12T16:39:49Z" | 4 | 1 | null | [
"gguf",
"code",
"text-generation",
"en",
"dataset:AlekseyScorpi/vacancies_prompts_en",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-04-26T17:31:41Z" | ---
license: llama2
datasets:
- AlekseyScorpi/vacancies_prompts_en
language:
- en
pipeline_tag: text-generation
tags:
- code
---
### About this model
* This is a GGUF quantization of the https://huggingface.co/AlekseyScorpi/llama_2_13b_vacancies_merged model
* You can find more information here: https://huggingface.co/AlekseyScorpi/llama_2_13b_vacancies_lora
* More information about GGUF here: https://huggingface.co/docs/hub/gguf
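As an illustration, a minimal loading sketch with llama-cpp-python; the quant filename below is a placeholder, not a file confirmed by this card, so check the repo's file list first.
```python
# Minimal sketch; the filename is a placeholder, pick a real .gguf from this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="AlekseyScorpi/llama_2_13b_vacancies_GGUF",
    filename="llama_2_13b_vacancies.Q4_K_M.gguf",  # placeholder name
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Write a short vacancy description for a Python developer.", max_tokens=128)["choices"][0]["text"])
```
|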
robert123231/coloringbookgenerator | robert123231 | "2023-11-10T04:35:52Z" | 832 | 8 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2023-11-10T04:35:37Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: house coloring book
parameters:
negative_prompt: colors
output:
url: images/Advent_Calendar_1.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# Coloring Book Generator
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/robert123231/coloringbookgenerator/tree/main) them in the Files & versions tab.
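For diffusers users, a minimal sketch of applying this LoRA on the SDXL base it targets; the prompt and negative prompt mirror the widget example above, and `load_lora_weights` may need an explicit `weight_name` if the repo holds several files.
```python
# Minimal sketch, assuming the usual diffusers SDXL + LoRA flow.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("robert123231/coloringbookgenerator")
image = pipe("house coloring book", negative_prompt="colors").images[0]
```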
|
KappaNeuro/ando-fuchs-style | KappaNeuro | "2023-09-14T02:34:40Z" | 6 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"black and white",
"noir",
"photo",
"style",
"ando fuchs",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-14T02:34:36Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- black and white
- noir
- photo
- style
- ando fuchs
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Ando Fuchs Style page
widget:
- text: Ando Fuchs Style - Generate an image of a man in his thirties, with a strong build, seen from behind, looking shocked as he discovers the truth about a woman in his local neighborhood in 1924 Egypt. The revelation seems to have shaken him.
- text: Ando Fuchs Style - An evocative high-resolution photograph of a lone figure walking through a misty, cobblestone street at dawn, with the atmospheric style echoing the moody urban scenes of Saul Leiter.
- text: Ando Fuchs Style - Catala Roca hyper realistic photograph in black and white. Front, silhouette of a child walking hand in hand with his father in the fog in the Barcelona of 1920, wearing 1920 style, the street where they walk should be more in the background, the focus is on the child and his father. Old Kodack. Leica. Add Noise photo
- text:
- text: Ando Fuchs Style - Generate an image of the man, choosing to return to the woman in an attempt to quiet her complaints, seen from behind in the winding streets of 1924 Egypt. The image should convey a sense of cautious decision.
- text: Ando Fuchs Style - a person is walking down the street in black and white, in the style of gabriele viertel, jan matejko, ryohei hase, picasso, photo-realistic hyperbole, ghostly presence, impressionistic venice scenes
- text: Ando Fuchs Style - Cinematic, F/22, Artistic street photography by Stanley Kubrick and marc riboud photography art 2020's long exposure of a crowd and a motionless old gentleman sitting on a bench
- text: Ando Fuchs Style - Bitter and resentful, sitting alone on a park bench, in the style of Lars von Trier, during a rainy afternoon, in the style of a desaturated photograph with muted colors
- text: Ando Fuchs Style - Portray the desolate woman, her back to us, wandering alone through the bustling streets of Cairo in 1924, her solitude sharply contrasting with the surrounding crowd.
- text: Ando Fuchs Style - vogue paris, 1960 r f davia, in the style of pierre pellegrini, dynamic balance, urban scenes, dark gray, kishin shinoyama, leica i, depictions of inclement weather
---
# Ando Fuchs Style

> Ando Fuchs Style - Generate an image of a man in his thirties, with a strong build, seen from behind, looking shocked as he discovers the truth about a woman in his local neighborhood in 1924 Egypt. The revelation seems to have shaken him.
<p>Ando Fuchs, hailing from San Candido, South Tyrol, Italy, is a self-taught photographer primarily focused on capturing the interactions between urban spaces and passersby.</p><p>He is known for capturing moments when his subjects are utterly natural, striving to make viewers "feel" the image - the core aim of his photography. </p><p>Raised in a foster family after losing his parents at an early age, Fuchs discovered his passion for photography in the mid-1980s.</p><p>After gaining diverse professional experience in the hotel and construction businesses, he returned to his passion in 2009 and has never left home without a camera since.</p><p>Although he generally shoots with a digital camera, he occasionally uses film.</p><p>Describing himself as a person who constantly doubts himself and his work, Fuchs is never satisfied with the results.</p><p>He finds difficulty in accepting praise, questioning its sincerity, but considers this trait as the driving force of his growth.</p><p>Fuchs, who has never attended any photography seminars and honed all his skills through practice, now lives and works in Zilliane, Austria, close to the Italian border.</p><p>He places the mood and imagery of a frame above technical perfection and often says, "Don't ask me about the camera and lens I used to take a picture, tell me what the image says to you."</p>
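A minimal loading sketch for this SDXL LoRA with diffusers; standard `load_lora_weights` usage is an assumption, and the prompt is taken from one of the examples below. Pass `weight_name` explicitly if the repo contains more than one weights file.
```python
# Minimal sketch, assuming the standard diffusers SDXL + LoRA flow.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("KappaNeuro/ando-fuchs-style")
image = pipe(
    "Ando Fuchs Style - a lone figure walking through a misty, cobblestone street at dawn"
).images[0]
```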
## Image examples for the model:

> Ando Fuchs Style - An evocative high-resolution photograph of a lone figure walking through a misty, cobblestone street at dawn, with the atmospheric style echoing the moody urban scenes of Saul Leiter.

> Ando Fuchs Style - Catala Roca hyper realistic photograph in black and white. Front, silhouette of a child walking hand in hand with his father in the fog in the Barcelona of 1920, wearing 1920 style, the street where they walk should be more in the background, the focus is on the child and his father. Old Kodack. Leica. Add Noise photo

>

> Ando Fuchs Style - Generate an image of the man, choosing to return to the woman in an attempt to quiet her complaints, seen from behind in the winding streets of 1924 Egypt. The image should convey a sense of cautious decision.

> Ando Fuchs Style - a person is walking down the street in black and white, in the style of gabriele viertel, jan matejko, ryohei hase, picasso, photo-realistic hyperbole, ghostly presence, impressionistic venice scenes

> Ando Fuchs Style - Cinematic, F/22, Artistic street photography by Stanley Kubrick and marc riboud photography art 2020's long exposure of a crowd and a motionless old gentleman sitting on a bench

> Ando Fuchs Style - Bitter and resentful, sitting alone on a park bench, in the style of Lars von Trier, during a rainy afternoon, in the style of a desaturated photograph with muted colors

> Ando Fuchs Style - Portray the desolate woman, her back to us, wandering alone through the bustling streets of Cairo in 1924, her solitude sharply contrasting with the surrounding crowd.

> Ando Fuchs Style - vogue paris, 1960 r f davia, in the style of pierre pellegrini, dynamic balance, urban scenes, dark gray, kishin shinoyama, leica i, depictions of inclement weather
|
fcski/real_model_L | fcski | "2023-07-29T07:55:29Z" | 0 | 15 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-06-25T10:33:10Z" | ---
license: creativeml-openrail-m
---
real_model_N
real_model_N outputs images similar to real_model_L's, but unlike that model it is available for download.
The recipe is for personal use only.
- A = cityedgemixV1_v125 x 0.5 + kisaragiMix_v22 x 0.5
- B = majicmixRealistic_v6 x 0.5 + shampooMix_v4 x 0.5
- C = A x 0.5 + B x 0.5
- D = fantasticmix_v65 x (1-alpha) + dreamshaper_631BakedVae x alpha (0.4,0.35,0.4,0.45,0.45,0.3,0.3,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
- E = C x 0.8 + D x 0.2
- F = E + flat2:-0.7 (lora merge)
- G = F x (1-alpha) + calicomixreal_v20 x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
- H = F x (1-alpha) + kMain_kMain21 x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
- I = F x (1-alpha) + lunamix_v10 x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
- J = F x (1-alpha) + xxmix9realistic_v30 x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
- K = H x 0.45 + I x 0.55
- L = (G x 0.6 + K x 0.4) x 0.6 + J x 0.4
- M = L x 0.447 + savMIX_xl x 0.553
- N = K x (1-alpha) + kencanmix_v16 x alpha (0.0,0.0,0.0,0.0,0.0,0.0,0.5,0.0,0.5,0.5,0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.11,0.25,0.35,0.5,0.0,0.0,0.0,0.0,0.0)
```
License:creativeml-openrail-m
For personal use. (not for commercial)
OK:Use the model without crediting the creator
NG:Sell images they generate
NG:Run on services that generate images for money
OK:Share merges using this model
NG:Sell this model or merges using this model
OK:Have different permissions when sharing merges
```
Thanks to the creators for the great models and LoRAs used in this model!
```
(I was tired, so I wrote this part in Japanese; translated:)
tauronHybridMix_tauHybridRealV21 was a no-merge model, so I tried replacing it.
The output images differ slightly, but they should show roughly the same characteristics as real_model_L... probably.
All of the component models are now either plain creativeml-openrail-m or allow merging and relicensing after merge, so I'm publishing this one.
Most of the models were non-commercial, merge-allowed, and relicensing-allowed, so I set the same license here.
```
samples:

----
real_model_L
The recipe is for personal use only (non-commercial, because of the component licenses).
This model file is no longer public.
I'm trying different assets and weights, and I'll share the next model.
photorealistic checkpoint for sd1.5, model merge example.
recipe for supermerger:
F is LoRA merge to checkpoint.
D,G,H,I,L are using MBW and weight sum.
J is using sum twice.
other is using weight sum.
- A = cityedgemixV1_v125 x 0.5 + kisaragiMix_v22 x 0.5
- B = majicmixRealistic_v6 x 0.5 + shampooMix_v4 x 0.5
- C = A x 0.5 + B x 0.5
- D = fantasticmix_v65 x (1-alpha) + dreamshaper_631BakedVae x alpha (0.4,0.35,0.4,0.45,0.45,0.3,0.3,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
- E = C x 0.8 + D x 0.2
- F = E + flat2:-0.7 (lora merge)
- G = F x (1-alpha) + calicomixreal_v20 x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
- H = F x (1-alpha) + tauronHybridMix_tauHybridRealV21 x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
- I = F x (1-alpha) + xxmix9realistic_v30 x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
- J = (G x 0.6 + H x 0.4) x 0.6 + I x 0.4
- K = J x 0.439 + savMIX_xl x 0.561
- L = K x (1-alpha) + kencanmix_v16 x alpha (0.0,0.0,0.0,0.0,0.0,0.0,0.5,0.0,0.5,0.5,0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.15,0.25,0.35,0.5,0.0,0.0,0.0,0.0,0.0)
----
Thanks to the creators of those wonderful models and LoRAs!
The model file is not available, but you can try merging the models yourself. ...welcome to the model-merge swamp!
----
I'm not a native English speaker (and I'm tired), so the notes below were originally written in Japanese; they are translated here.
Looking back at these notes from when I made the model, I realize I built it fairly intuitively:
```
A-C: evenly merge models that reliably produce fully photographic, cute Asian girls (this is the base; honestly I mixed these rather roughly, so it may be a point for future improvement)
D: mixed in because I wanted a little of dreamshaper's shapes and structure (it has some 2D in it), and fantastic responded well as a solid photo model
E: averages everything built so far
F: applies the detailing pass (-1 felt like too much, so I used -0.7)
G-I: swap the text encoder (from my merge candidates, I used hand-picked 3D models that respond especially accurately to 2D-character LoRAs, mainly outfits)
J: mix while watching the ratios (tau had a strong influence on the others, so I weakened it a bit)
K: fold in sav on a hunch (its outputs were quite good, so I wanted to include it)
L: bring in kencanmix's face layers (the OUT side is slightly suppressed because it affects outfits; changing the OUT side any further makes the outfit and face output shaky, so this value is right at the limit)
```
After that I tried mixing in various other things, but nothing quite worked... in the end this gave the best results, so I went with it.
I have only tested with specific seeds and specific LoRA combinations (as a side check I confirmed that other LoRAs render photorealistically... some gave eyes that were too large, so I lowered the LoRA weight slightly in places).
So some LoRAs may not come out cleanly; when that happens, it might be fun to merge in a model that renders that LoRA well (let's all soak in the model-merge swamp together).
|
Adi-ds/Kaggle-Science-LLM | Adi-ds | "2023-10-28T22:01:58Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | "2023-08-08T17:57:41Z" | ---
tags:
- generated_from_trainer
model-index:
- name: Kaggle-Science-LLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Kaggle-Science-LLM
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 69
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.6679 | 0.01 | 5 | 6.5113 |
| 6.4844 | 0.02 | 10 | 6.3461 |
| 6.2521 | 0.02 | 15 | 6.1616 |
| 6.0889 | 0.03 | 20 | 5.9515 |
| 5.8295 | 0.04 | 25 | 5.7202 |
| 5.6072 | 0.05 | 30 | 5.4724 |
| 5.339 | 0.06 | 35 | 5.2136 |
| 5.0985 | 0.06 | 40 | 4.9514 |
| 4.8879 | 0.07 | 45 | 4.6861 |
| 4.6319 | 0.08 | 50 | 4.4145 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
sentence-transformers/bert-base-nli-cls-token | sentence-transformers | "2025-03-06T13:19:12Z" | 1,802 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"openvino",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# bert-base-nli-cls-token
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/bert-base-nli-cls-token')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-cls-token')
model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-cls-token')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, CLS pooling (take the [CLS] token embedding).
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
kmichiru/Nikaido-7B-mistral-instruct-v0.3-vn_v2 | kmichiru | "2023-11-12T19:33:48Z" | 0 | 0 | null | [
"novel generation",
"chat",
"persona-chat",
"ja",
"license:wtfpl",
"region:us"
] | null | "2023-11-12T19:28:07Z" | ---
license: wtfpl
language:
- ja
tags:
- novel generation
- chat
- persona-chat
---
## Training procedure
Visual Novel scripts (Japanese only) + Mistral-7B-v0.1 + LoRA. Refer to `isft_mistral.py` for the training process.
### Framework versions
- PEFT 0.4.0
### Model detail
Only the LoRA adapter is uploaded. Refer to `inference.py` for an inference example; a minimal loading sketch follows.
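A minimal sketch, assuming a standard PEFT LoRA adapter on top of the Mistral-7B-v0.1 base named in the training note; file layout and generation settings are assumptions, and `inference.py` remains the authoritative example.
```python
# Minimal sketch; assumptions noted above. Requires `peft` and `transformers`.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "kmichiru/Nikaido-7B-mistral-instruct-v0.3-vn_v2")
```
|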
NewstaR/Starlight-13B | NewstaR | "2023-11-18T02:53:15Z" | 1,488 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"en",
"dataset:FinchResearch/AboveTheClouds",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-11T22:17:57Z" | ---
language:
- en
datasets:
- FinchResearch/AboveTheClouds
tags:
- llama
- llama2
---
# Starlight (13B)
| Model | Average ⬆️ | ARC | HellaSwag | MMLU | TruthfulQA |
|----------------------|------------|-------|-----------|-------|------------|
| NewstaR/Starlight-13B| 58.63 | 59.3 | 82.15 | 55.67 | 37.39 |
| NewstaR/Starlight-7B | 54.3 | 53.07 | 78.57 | 46.8 | 38.75 |
## The model follows the Alpaca template:
```
### Instruction: {prompt} ### Response:
```
## Example:
```
### Instruction: Summarize the key details of the Starlight model in a few sentences.
### Response: Starlight is a 13B parameter transformer model trained on the AverageData and Above the Clouds datasets for conversational text generation. It has strong language modeling capabilities but lacks true language understanding and may generate incorrect or biased text, so outputs should be monitored and safeguards implemented. The model is intended for use in chatbots and content creation applications.
```
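A minimal generation sketch that wraps a prompt in the Alpaca template above; standard transformers usage and the decoding settings are assumptions, since the card does not prescribe them.
```python
# Minimal sketch, assuming standard transformers causal-LM usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("NewstaR/Starlight-13B")
model = AutoModelForCausalLM.from_pretrained("NewstaR/Starlight-13B", device_map="auto")

prompt = "### Instruction: List two limitations of large language models. ### Response:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```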
## Instructions for Safe Use
- Avoid exposing Starlight to offensive, unethical, dangerous or illegal prompts
- Monitor outputs for signs of bias, toxicity or factual incorrectness
- Do not rely on Starlight for high-stakes or safety critical applications
## Limitations
- May hallucinate or generate incorrect information
- Large model size leads to high compute requirements
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NewstaR__Starlight-13B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.87 |
| ARC (25-shot) | 59.3 |
| HellaSwag (10-shot) | 82.15 |
| MMLU (5-shot) | 55.67 |
| TruthfulQA (0-shot) | 37.39 |
| Winogrande (5-shot) | 76.64 |
| GSM8K (5-shot) | 10.84 |
| DROP (3-shot) | 6.08 |
|
Vibhav1612/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters | Vibhav1612 | "2024-05-07T17:54:29Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | "2024-05-07T17:54:27Z" | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 |
zhiqiulin/clip-flant5-xxl | zhiqiulin | "2024-10-06T20:29:03Z" | 41,657 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:2404.01291",
"base_model:google/flan-t5-xxl",
"base_model:finetune:google/flan-t5-xxl",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T07:27:36Z" | ---
license: apache-2.0
language:
- en
base_model:
- google/flan-t5-xxl
---
# CLIP-FlanT5-XXL (VQAScore)
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of google/flan-t5-xxl designed for image-text retrieval tasks, as presented in the VQAScore paper.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Zhiqiu Lin and collaborators
- **Model type:** Vision-Language Generative Model
- **License:** Apache-2.0
- **Finetuned from model:** google/flan-t5-xxl
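For scoring image-text pairs, the authors' `t2v_metrics` package (see the repository link below) is the intended entry point; the sketch here is an assumption based on that package's documented interface, so verify the exact API against the linked repo.
```python
# Minimal sketch; the t2v_metrics API shown is an assumption. See the linked
# repository and demo for authoritative usage.
import t2v_metrics

score_fn = t2v_metrics.VQAScore(model="clip-flant5-xxl")
scores = score_fn(images=["image0.png"], texts=["a dog chasing a ball"])  # higher = better alignment
print(scores)
```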
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/linzhiqiu/CLIP-FlanT5
- **Paper:** https://arxiv.org/pdf/2404.01291
- **Demo:** https://huggingface.co/spaces/zhiqiulin/VQAScore |
Pection/llama3-finetune | Pection | "2024-11-27T07:16:37Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"th",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-25T10:03:38Z" | ---
language:
- en
- th
library_name: transformers
base_model:
- meta-llama/Llama-3.2-1B
tags:
- text-generation
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.5
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# LLaMA 3 Fine-Tuned Model
This is a fine-tuned version of the LLaMA 3 model. Below is an example of how to use it:
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Pection/llama3-finetune")
model = AutoModelForCausalLM.from_pretrained("Pection/llama3-finetune")
# Generate response
prompt = "Where is Bangkok?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
|
choidf/finetuning-sentiment-model-roberta-base-25000-samples | choidf | "2023-10-25T14:58:42Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-25T11:41:49Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-roberta-base-25000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9476
- name: F1
type: f1
value: 0.9488481062085123
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-roberta-base-25000-samples
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3321
- Accuracy: 0.9476
- F1: 0.9488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2475 | 1.0 | 1407 | 0.2287 | 0.936 | 0.9383 |
| 0.1528 | 2.0 | 2814 | 0.2354 | 0.9328 | 0.9319 |
| 0.0888 | 3.0 | 4221 | 0.2754 | 0.9432 | 0.9452 |
| 0.0476 | 4.0 | 5628 | 0.2962 | 0.9464 | 0.9475 |
| 0.0275 | 5.0 | 7035 | 0.3321 | 0.9476 | 0.9488 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
alelisita/jbalvin | alelisita | "2025-02-03T14:56:48Z" | 16 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-03T14:44:26Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Jbalvin
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('alelisita/jbalvin', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
LaaZa/sealion-gptq-test | LaaZa | "2023-11-24T19:43:49Z" | 4 | 0 | transformers | [
"transformers",
"mpt",
"text-generation",
"custom_code",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-11-24T18:10:18Z" | ---
license: cc-by-nc-sa-4.0
---
|
sequelbox/gemma-2-9B-MOTH | sequelbox | "2024-09-13T02:27:38Z" | 10 | 0 | null | [
"safetensors",
"gemma2",
"supernova",
"moth",
"gemma",
"gemma-2",
"gemma-2-it",
"gemma-2-9b-it",
"9b",
"general",
"conversational",
"chat",
"instruct",
"text-generation",
"en",
"dataset:sequelbox/Supernova",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"license:gemma",
"model-index",
"region:us"
] | text-generation | "2024-09-09T14:33:20Z" | ---
language:
- en
license: gemma
tags:
- supernova
- moth
- gemma
- gemma-2
- gemma-2-it
- gemma-2-9b-it
- 9b
- general
- conversational
- chat
- instruct
base_model: google/gemma-2-9b-it
datasets:
- sequelbox/Supernova
pipeline_tag: text-generation
model-index:
- name: gemma-2-9B-MOTH
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 20.59
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/gemma-2-9B-MOTH
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 3.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/gemma-2-9B-MOTH
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/gemma-2-9B-MOTH
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.34
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/gemma-2-9B-MOTH
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 0.62
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/gemma-2-9B-MOTH
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.56
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/gemma-2-9B-MOTH
name: Open LLM Leaderboard
---
- MOTH is a general chat AI.
- MOTH is finetuned on [high quality synthetic data.](https://huggingface.co/datasets/sequelbox/Supernova)
- MOTH is trained on a variety of skills and specialties.
- This version of MOTH is trained on the [Gemma 2 Instruct format.](https://huggingface.co/google/gemma-2-9b-it)
- MOTH is also available for [Llama 3.1;](https://huggingface.co/sequelbox/Llama3.1-8B-MOTH) more MOTH finetunes for other models to follow.
- MOTH has not been manually tested and uses automatically generated datasets.
- Do as you will.
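A minimal chat sketch using the Gemma 2 Instruct format noted in the list above, via the standard transformers chat-template flow; the decoding settings are illustrative.
```python
# Minimal sketch, assuming standard transformers chat-template usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("sequelbox/gemma-2-9B-MOTH")
model = AutoModelForCausalLM.from_pretrained("sequelbox/gemma-2-9B-MOTH", device_map="auto")

messages = [{"role": "user", "content": "Suggest a name for a hiking club."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```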
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sequelbox__gemma-2-9B-MOTH)
| Metric |Value|
|-------------------|----:|
|Avg. | 4.55|
|IFEval (0-Shot) |20.59|
|BBH (3-Shot) | 3.21|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot) | 1.34|
|MuSR (0-shot) | 0.62|
|MMLU-PRO (5-shot) | 1.56|
|
acowrightnow/adam | acowrightnow | "2025-01-25T23:04:22Z" | 32 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-25T22:38:33Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Adam
---
# Adam
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Adam` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('acowrightnow/adam', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lucasvw/tinyllama-1.1B_alpaca_2k_lora | lucasvw | "2024-05-23T09:52:48Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-05-23T09:37:32Z" | ---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model-index:
- name: tinyllama-1.1B_alpaca_2k_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
# Adapted from https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/tiny-llama/lora.yml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: mhenrichsen/alpaca_2k_test
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/lora-out
hub_model_id: lucasvw/tinyllama-1.1B_alpaca_2k_lora
wandb_project: tinyllama-1.1B_alpaca_2k_lora
wandb_entity: lucasvw
sequence_len: 4096
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# tinyllama-1.1B_alpaca_2k_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4615 | 0.08 | 1 | 1.4899 |
| 1.3851 | 0.24 | 3 | 1.4860 |
| 1.3667 | 0.48 | 6 | 1.4396 |
| 1.2684 | 0.72 | 9 | 1.3410 |
| 1.2274 | 0.96 | 12 | 1.2938 |
| 1.2519 | 1.16 | 15 | 1.2810 |
| 1.2263 | 1.4 | 18 | 1.2534 |
| 1.1355        | 1.64   | 21   | 1.2357          |
| 1.2697 | 1.88 | 24 | 1.2260 |
| 1.1492 | 2.08 | 27 | 1.2217 |
| 1.1531 | 2.32 | 30 | 1.2216 |
| 1.1951 | 2.56 | 33 | 1.2184 |
| 1.1118 | 2.8 | 36 | 1.2158 |
| 1.1514 | 3.04 | 39 | 1.2127 |
| 1.1893 | 3.24 | 42 | 1.2124 |
| 1.1014 | 3.48 | 45 | 1.2115 |
| 1.1892        | 3.72   | 48   | 1.2132          |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
VHKE/warthog-ziwijn | VHKE | "2025-01-22T11:33:25Z" | 42 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-20T06:31:54Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/warthog-ziwijn_011000_00_20250120041003.png
text: warthog ziwijn
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: warthog ziwijn
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# warthog ziwijn
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `warthog ziwijn` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
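## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
As a hedged sketch (not part of the original card), the LoRA can also be loaded on top of the FLUX.1-dev base model in diffusers; the `weight_name` below is an assumption about the file name in this repository.

```python
# Hedged sketch: loading this LoRA with diffusers on top of FLUX.1-dev.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is assumed; check the repo's file list for the actual name.
pipe.load_lora_weights("VHKE/warthog-ziwijn", weight_name="warthog-ziwijn.safetensors")
image = pipe("warthog ziwijn in tall savanna grass", num_inference_steps=28).images[0]
image.save("warthog-ziwijn.png")
```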
|
timm/ese_vovnet19b_dw.ra_in1k | timm | "2025-01-21T18:04:52Z" | 23,156 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1904.09730",
"arxiv:1911.06667",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-21T23:11:53Z" | ---
tags:
- image-classification
- timm
- transformers
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for ese_vovnet19b_dw.ra_in1k
A VoVNet-v2 image classification model. Pretrained on ImageNet-1k in `timm` by Ross Wightman using the RandAugment (`RA`) recipe. Related to the `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 6.5
- GMACs: 1.3
- Activations (M): 8.2
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- An Energy and GPU-Computation Efficient Backbone Network: https://arxiv.org/abs/1904.09730
  - CenterMask: Real-Time Anchor-Free Instance Segmentation: https://arxiv.org/abs/1911.06667
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('ese_vovnet19b_dw.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ese_vovnet19b_dw.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 768, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ese_vovnet19b_dw.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{lee2019energy,
title = {An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection},
author = {Lee, Youngwan and Hwang, Joong-won and Lee, Sangrok and Bae, Yuseok and Park, Jongyoul},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019}
}
```
```bibtex
@inproceedings{lee2019centermask,
title={CenterMask: Real-Time Anchor-Free Instance Segmentation},
author={Lee, Youngwan and Park, Jongyoul},
booktitle={CVPR},
year={2020}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
mradermacher/llemma_7b-i1-GGUF | mradermacher | "2025-02-10T06:45:47Z" | 170 | 0 | transformers | [
"transformers",
"gguf",
"math",
"reasoning",
"en",
"dataset:EleutherAI/proof-pile-2",
"dataset:open-web-math/open-web-math",
"base_model:EleutherAI/llemma_7b",
"base_model:quantized:EleutherAI/llemma_7b",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-02-09T05:31:01Z" | ---
base_model: EleutherAI/llemma_7b
datasets:
- EleutherAI/proof-pile-2
- open-web-math/open-web-math
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- math
- reasoning
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/EleutherAI/llemma_7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llemma_7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
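As a concrete illustration (not from this card), a single-file quant can be run from Python with llama-cpp-python; the file name below matches one of the quants listed in the table that follows.

```python
# Hedged sketch: running the i1-Q4_K_S quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="llemma_7b.i1-Q4_K_S.gguf", n_ctx=4096)
result = llm("State and briefly prove the triangle inequality.", max_tokens=256)
print(result["choices"][0]["text"])
```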
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llemma_7b-i1-GGUF/resolve/main/llemma_7b.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llemma_7b-i1-GGUF/resolve/main/llemma_7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llemma_7b-i1-GGUF/resolve/main/llemma_7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/llemma_7b-i1-GGUF/resolve/main/llemma_7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hardikJ11/bart-base-finetuned-cnn-news | hardikJ11 | "2024-01-16T07:45:05Z" | 12 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2024-01-16T06:17:42Z" | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: bart-base-finetuned-cnn-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 21.8948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-cnn-news
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8560
- Rouge1: 21.8948
- Rouge2: 9.7157
- Rougel: 17.9348
- Rougelsum: 20.5347
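As a hedged usage sketch (not included in the original card), summarization inference with this checkpoint via the `transformers` pipeline might look like:

```python
# Hedged sketch: news summarization with this fine-tuned BART checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="hardikJ11/bart-base-finetuned-cnn-news")
article = (
    "The city council approved a new transit plan on Monday, promising "
    "expanded bus service and two new light-rail lines by 2030."
)
summary = summarizer(article, max_length=128, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```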
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.7005 | 1.0 | 718 | 2.9872 | 21.7279 | 9.0406 | 17.392 | 20.0627 |
| 2.937 | 2.0 | 1436 | 2.8590 | 21.3056 | 8.5254 | 17.2338 | 20.0403 |
| 2.2642 | 3.0 | 2154 | 2.6744 | 21.277 | 9.6162 | 17.7775 | 20.1688 |
| 1.5774 | 4.0 | 2872 | 2.7020 | 21.7458 | 9.846 | 18.1649 | 20.7067 |
| 1.0174 | 5.0 | 3590 | 2.8560 | 21.8948 | 9.7157 | 17.9348 | 20.5347 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
peteozegov/poca-SoccerTwos | peteozegov | "2023-06-07T17:18:03Z" | 40 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-06-07T08:39:55Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: peteozegov/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
laalays/whisper_fintuned | laalays | "2024-05-02T05:38:36Z" | 18 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny.en",
"base_model:finetune:openai/whisper-tiny.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-01T19:47:30Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny.en
tags:
- generated_from_trainer
model-index:
- name: whisper_fintuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_fintuned
This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2894
- eval_wer: 13.9949
- eval_runtime: 54.8883
- eval_samples_per_second: 9.109
- eval_steps_per_second: 1.148
- epoch: 16.3889
- step: 590
## Model description
More information needed
## Intended uses & limitations
More information needed
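As a hedged usage sketch (not part of the original card), transcription with this checkpoint via the `transformers` ASR pipeline might look like:

```python
# Hedged sketch: English speech-to-text with this fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="laalays/whisper_fintuned")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder local audio file
```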
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1.dev0
- Tokenizers 0.19.1
|
ghtac/qwen2-macro | ghtac | "2024-06-09T22:18:31Z" | 3 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2-7B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Qwen2-7B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-09T22:12:45Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
base_model: unsloth/Qwen2-7B-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** ghtac
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-7B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ReadyArt/The-Omega-Directive-M-22B-v1.0_EXL2_4.0bpw_H8 | ReadyArt | "2025-04-08T17:54:23Z" | 0 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"text-generation",
"conversational",
"en",
"base_model:ReadyArt/The-Omega-Directive-M-22B-v1.0",
"base_model:quantized:ReadyArt/The-Omega-Directive-M-22B-v1.0",
"license:other",
"4-bit",
"exl2",
"region:us"
] | text-generation | "2025-04-07T14:48:10Z" | ---
license: other
license_name: mrl
language:
- en
base_model:
- ReadyArt/The-Omega-Directive-M-22B-v1.0
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
content: 'The white van is onto you. Get out now.';
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">The-Omega-Directive-M-22B-v1.0</h1>
<p class="subtitle">Where Forbidden Knowledge Meets Unparalleled Immersion</p>
</div>
<div class="waifu-container">
<img src="https://i.imghippo.com/files/EBq6162wlk.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section remember-this">
<h2 class="section-title">⚡ Quantum Leap Forward</h2>
<p>This evolution of Forgotten-Safeword delivers coherent depravity with unprecedented immersion:</p>
<ul>
<li>🧬 <strong>Expanded 22M Token Dataset</strong> - Incorporating 90 erotic novels and 6,496 kink scenarios</li>
<li>⚡ <strong>Optimized Architecture</strong> - Smoother training curve yields more intelligent outputs</li>
<li>💎 <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's edge while reducing jarring inconsistencies</li>
<li>📜 <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li>
<li>🌹 <strong>Unexpected Depth</strong> - Occasionally surprises with profound insights amidst the debauchery</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">⚙️ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V3-Tekken-T4" class="link-button">Mistral-V3-Tekken-T4</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF Quants</h3>
<a href="https://huggingface.co/mradermacher/The-Omega-Directive-M-22B-v1.0-GGUF" class="link-button">Available Quantizations</a>
</div>
<div class="link-card">
<h3>imatrix Quants</h3>
<a href="https://huggingface.co/mradermacher/The-Omega-Directive-M-22B-v1.0-i1-GGUF" class="link-button">Available Quantizations</a>
</div>
<div class="link-card">
<h3>EXL2 Quants</h3>
<a href="https://huggingface.co/collections/ReadyArt/the-omega-directive-m-22b-v10-exl2-67f3e814aaa5f55e3242fce1" class="link-button">Available Quantizations</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">⚠️ Ethical Considerations</h2>
<div class="disclaimer">
<p>This model maintains Forgotten-Safeword's complete lack of alignment:</p>
<ul>
<li>🚫 No moral or ethical constraints on generated content</li>
<li>🔞 Will enthusiastically engage with any NSFW scenario</li>
<li>💀 May generate content that requires industrial-grade brain bleach</li>
<li>⚖️ Perfectly balanced... as all things should be</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">📜 Performance Notes</h2>
<ul>
<li>🔥 Maintains signature intensity with improved narrative flow</li>
<li>📖 Handles multi-character scenarios with improved consistency</li>
<li>🧠 Excels at long-form storytelling without losing track of plot threads</li>
<li>⚡ Noticeably better at following complex instructions than previous versions</li>
<li>🎭 Responds to subtle prompt nuances like a mind reader</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">🧑🔬 Model Authors</h2>
<ul>
<li>TheDrummer (Base Model Architect)</li>
<li>SteelSkull (Dataset Generation Contributor)</li>
<li>Artus (EXL2 Weights Weaver)</li>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">☕ Support the Architects</h2>
<div class="button-group">
<a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a>
<a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">🔖 License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
<script>
// This script has always been here
document.getElementById('date').textContent = new Date().toLocaleDateString();
setInterval(() => {
document.getElementById('credit').textContent =
contributors[Math.floor(Math.random() * contributors.length)];
}, 7000);
// Flash warning behavior
setTimeout(() => {
const reminder = document.createElement('div');
reminder.className = 'flash-warning';
reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?';
reminder.style.animation = 'flashWarning 15s ease-in-out forwards';
document.body.appendChild(reminder);
setInterval(() => {
if(Math.random() > 0.9) {
document.body.appendChild(reminder.cloneNode(true));
}
}, 45000);
}, 30000);
// Make cursor behave strangely
document.addEventListener('mousemove', (e) => {
if(Math.random() > 0.98) {
document.documentElement.style.cursor = 'wait';
setTimeout(() => {
document.documentElement.style.cursor = '';
}, 50);
}
});
// Randomly shift sections when not looking
setInterval(() => {
if(document.hidden) {
document.querySelectorAll('.shifty-section').forEach(section => {
section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`;
});
}
}, 1500);
</script> |
Kungen1234/llama2-qlora-finetunined-french-test | Kungen1234 | "2023-07-21T13:20:25Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-21T13:20:20Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
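For reference, a hedged reconstruction of the values above as a `transformers` `BitsAndBytesConfig` (this code is not part of the original card):

```python
# Hedged reconstruction of the listed quantization settings.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```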
### Framework versions
- PEFT 0.5.0.dev0
|
BinhMinhs10/whisper-tiny-minds14-en | BinhMinhs10 | "2024-05-09T04:35:11Z" | 94 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-07T03:58:30Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.32078963602714374
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8739
- Wer: 0.320790
## Model description
More information needed
## Intended uses & limitations
More information needed
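As a hedged inference sketch (not in the original card), transcribing one held-out MINDS-14 sample, using the same `train[450:]` split the evaluation used:

```python
# Hedged sketch: transcribing an evaluation sample from PolyAI/minds14.
from datasets import load_dataset
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="BinhMinhs10/whisper-tiny-minds14-en")
minds = load_dataset("PolyAI/minds14", "en-US", split="train[450:]")
sample = minds[0]["audio"]  # dict with "array" and "sampling_rate"
print(asr(sample.copy())["text"])
```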
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0001 | 125.0 | 500 | 0.8046 | 30.4133 |
| 0.0001 | 250.0 | 1000 | 0.8565 | 31.8322 |
| 0.0001 | 375.0 | 1500 | 0.8739 | 32.0790 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1 |
RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf | RichardErkhov | "2025-03-14T00:53:18Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-14T00:44:51Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-0.5B-SFT-2e-4-2ep - GGUF
- Model creator: https://huggingface.co/JayHyeon/
- Original model: https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-4-2ep/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q2_K.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q2_K.gguf) | Q2_K | 0.39GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.IQ3_XS.gguf) | IQ3_XS | 0.39GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.IQ3_S.gguf) | IQ3_S | 0.39GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q3_K_S.gguf) | Q3_K_S | 0.39GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.IQ3_M.gguf) | IQ3_M | 0.39GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q3_K.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q3_K.gguf) | Q3_K | 0.4GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q3_K_M.gguf) | Q3_K_M | 0.4GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q3_K_L.gguf) | Q3_K_L | 0.42GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.IQ4_XS.gguf) | IQ4_XS | 0.4GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_0.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_0.gguf) | Q4_0 | 0.4GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.IQ4_NL.gguf) | IQ4_NL | 0.4GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_K_S.gguf) | Q4_K_S | 0.45GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_K.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_K.gguf) | Q4_K | 0.46GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_K_M.gguf) | Q4_K_M | 0.46GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_1.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q4_1.gguf) | Q4_1 | 0.43GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_0.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_0.gguf) | Q5_0 | 0.46GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_K_S.gguf) | Q5_K_S | 0.48GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_K.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_K.gguf) | Q5_K | 0.49GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_K_M.gguf) | Q5_K_M | 0.49GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_1.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q5_1.gguf) | Q5_1 | 0.49GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q6_K.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q6_K.gguf) | Q6_K | 0.61GB |
| [Qwen2.5-0.5B-SFT-2e-4-2ep.Q8_0.gguf](https://huggingface.co/RichardErkhov/JayHyeon_-_Qwen2.5-0.5B-SFT-2e-4-2ep-gguf/blob/main/Qwen2.5-0.5B-SFT-2e-4-2ep.Q8_0.gguf) | Q8_0 | 0.63GB |
Original model description:
---
base_model: Qwen/Qwen2.5-0.5B
datasets: HuggingFaceH4/ultrafeedback_binarized
library_name: transformers
model_name: Qwen2.5-0.5B-SFT-2e-4-2ep
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-0.5B-SFT-2e-4-2ep
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen2.5-0.5B-SFT-2e-4-2ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/muldaebo)
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.47.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stablediffusionapi/better-than-hentai-xxxl | stablediffusionapi | "2024-06-06T18:31:56Z" | 38 | 2 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-06T18:29:25Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Better Than Hentai XXXL API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "better-than-hentai-xxxl".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/better-than-hentai-xxxl)
Model link: [View model](https://modelslab.com/models/better-than-hentai-xxxl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "better-than-hentai-xxxl",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
toshi456/ConvLLaVA-JP-1.3b-768-Pretrain | toshi456 | "2024-06-05T14:51:30Z" | 51 | 0 | transformers | [
"transformers",
"safetensors",
"llava-jp",
"text-generation",
"ja",
"dataset:turing-motors/LLaVA-Pretrain-JA",
"arxiv:2405.15738",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-05T13:13:43Z" | ---
license: apache-2.0
datasets:
- turing-motors/LLaVA-Pretrain-JA
language:
- ja
---
# ConvLLaVA-JP Model Card
This is a pretrained checkpoint; you can use it to instruction-tune your multimodal models.
Check out the instructions [here](https://github.com/tosiyuki/LLaVA-JP)
## Model details
**Model type:**
ConvLLaVA-JP is a vision-language model that can converse about input images.<br>
This LVLM was trained using [laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) as the image encoder and [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) as the text decoder. It supports input images at 768 x 768 high resolution.
## Training dataset
- [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA)
## Acknowledgement
- [ConvLLaVA](https://arxiv.org/abs/2405.15738)
- [LLM-jp](https://llm-jp.nii.ac.jp/)
- [Open CLIP](https://github.com/mlfoundations/open_clip)
## License
Apache-2.0 |
Jexom/fluffyrock-loras | Jexom | "2023-11-28T18:01:45Z" | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2023-09-16T15:07:10Z" | ---
license: cc-by-nc-sa-4.0
---
LoRAs made for use with the [Fluffyrock vpred model](https://civitai.com/models/92450?modelVersionId=159473).
You can download the LoRAs one by one or git clone them into your lora directory.
# Pokegirls
## Jessie
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/jessie.safetensors)**
**Trigger:** jessie \(pokemon\)
<img src="previews/jessie.png" width="288"/>
---
## Misty
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/misty.safetensors)**
**Trigger:** misty \(pokemon\)
**Outfit:** shorts, yellow crop top, suspenders
<img src="previews/misty.png" width="288"/>
---
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/misty_gen2.safetensors)**
**Trigger:** misty \(gen2\)
<img src="previews/misty_gen2.png" width="288"/>
---
## May
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/may.safetensors)**
**Trigger:** may \(pokemon\)
<img src="previews/may.png" width="288"/>
---
## Phoebe
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/phoebe.safetensors)**
**Trigger:** phoebe \(pokemon\)
<img src="previews/phoebe.png" width="288"/>
---
## Dawn
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/dawn.safetensors)**
**Trigger:** dawn \(pokemon\)
<img src="previews/dawn.png" width="288"/>
---
## Candice
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/candice.safetensors)**
**Trigger:** candice \(pokemon\), pigtails, braided hair
<img src="previews/candice.png" width="288"/>
---
## Cynthia
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/cynthia.safetensors)**
**Trigger:** cynthia \(pokemon\)
<img src="previews/cynthia.png" width="288"/>
---
## Rosa
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/rosa.safetensors)**
**Trigger:** rosa \(pokemon\)
<img src="previews/rosa.png" width="288"/>
---
## Bianca
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/bianca.safetensors)**
**Trigger:** bianca \(pokemon\)
<img src="previews/bianca.png" width="288"/>
---
## Shauna
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/shauna.safetensors)**
**Trigger:** shauna \(pokemon\)
<img src="previews/shauna.png" width="288"/>
---
## Lillie
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/lillie.safetensors)**
**Trigger:** lillie \(pokemon\)
<img src="previews/lillie.png" width="288"/>
---
## Lusamine
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/lusamine.safetensors)**
**Trigger:** lusamine \(pokemon\)
<img src="previews/lusamine.png" width="288"/>
---
## Wicke
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/wicke.safetensors)**
**Trigger:** wicke \(pokemon\)
**Outfit:** pink sweater, lab coat, glasses
<img src="previews/wicke.png" width="288"/>
---
## Olivia
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/olivia.safetensors)**
**Trigger:** olivia \(pokemon\)
<img src="previews/olivia.png" width="288"/>
---
## Bea
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/bea.safetensors)**
**Trigger:** bea \(pokemon\)
<img src="previews/bea.png" width="288"/>
---
## Nessa
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/nessa.safetensors)**
**Trigger:** nessa \(pokemon\)
<img src="previews/nessa.png" width="288"/>
---
## Marnie
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/marnie.safetensors)**
**Trigger:** marnie \(pokemon\)
<img src="previews/marnie.png" width="288"/>
---
## Melony
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/melony.safetensors)**
**Trigger:** melony \(pokemon\)
<img src="previews/melony.png" width="288"/>
---
## Oleana
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/oleana.safetensors)**
**Trigger:** oleana \(pokemon\)
<img src="previews/oleana.png" width="288"/>
---
## Klara
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/klara.safetensors)**
**Trigger:** klara \(pokemon\)
<img src="previews/klara.png" width="288"/>
---
## Cyllene
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/cyllene.safetensors)**
**Trigger:** cyllene \(pokemon\)
<img src="previews/cyllene.png" width="288"/>
---
## Nemona
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokegirls/nemona.safetensors)**
**Trigger:** nemona \(pokemon\)
<img src="previews/nemona.png" width="288"/>
---
# Pokemon
## Tsareena
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokemon/tsareena.safetensors)**
**Trigger:** tsareena
<img src="previews/tsareena.png" width="288"/>
---
## Hatterene
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/pokemon/hatterene.safetensors)**
**Trigger:** hatterene
<img src="previews/hatterene.png" width="288"/>
---
# Animal Crossing
## Tiffany
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/animalcrossing/tiffany.safetensors)**
**Trigger:** tiffany \(animal crossing\)
<img src="previews/tiffany.png" width="288"/>
---
## Bonbon
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/animalcrossing/bonbon.safetensors)**
**Trigger:** bonbon \(animal crossing\)
<img src="previews/bonbon.png" width="288"/>
---
# Splatoon
## Shiver
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/splatoon/shiver.safetensors)**
**Trigger:** shiver \(splatoon\)
<img src="previews/shiver.png" width="288"/>
---
## Frye
**[[Download]](https://huggingface.co/Jexom/fluffyrock-loras/resolve/main/splatoon/frye.safetensors)**
**Trigger:** frye \(splatoon\)
<img src="previews/frye.png" width="288"/>
|
ReasoningEval/Qwen2.5-7B-Huatuo-all-SFT-RL | ReasoningEval | "2025-03-20T05:15:41Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2025-03-20T04:07:13Z" | ---
license: apache-2.0
---
### Qwen2.5-7B-Huatuo-all-SFT-RL
- Base Model: [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
- Training Epochs: 3
- Training Objective: SFT + RL
- Training Data:
- SFT Data: [ReasoningEval/Huatuo-SFT-all](https://huggingface.co/datasets/ReasoningEval/Huatuo-SFT-all)
- RL Data: [ReasoningEval/Huatuo-RL](https://huggingface.co/datasets/ReasoningEval/Huatuo-RL) |
DataPilot/sarashina2.2-3Bx4-moe | DataPilot | "2025-03-08T16:15:20Z" | 0 | 1 | null | [
"safetensors",
"mixtral",
"ja",
"base_model:sbintuitions/sarashina2.2-3b-instruct-v0.1",
"base_model:finetune:sbintuitions/sarashina2.2-3b-instruct-v0.1",
"license:mit",
"region:us"
] | null | "2025-03-08T16:03:52Z" | ---
license: mit
language:
- ja
base_model:
- sbintuitions/sarashina2.2-3b-instruct-v0.1
---
# DataPilot/sarashina2.2-3Bx4-moe
**DataPilot/sarashina2.2-3Bx4-moe** is a Mixture of Experts (MoE) model with roughly 12B parameters, created by merging four copies of the sbintuitions/sarashina2.2-3b-instruct-v0.1 model. It was built with mergekit-moe by fusing one base model (supplying the self-attention and layer-normalization parameters) with three expert models (supplying the MLP parameters).
## Features
- **Mixture of Experts architecture**
  Integrates the knowledge of multiple experts to generate specialized, high-quality responses for each task.
- **Improved performance through merging**
  Fusing four copies of the base and expert models expands the total parameter count to roughly 12B, improving accuracy and expressive power.
## Recommended usage
The following Python code loads the model and runs text generation.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed
# Load the model
model_name = "DataPilot/sarashina2.2-3Bx4-moe"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
chat_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(123)
# User input (Japanese: "Hello. Please tell me your name")
user_input = [{"role": "user", "content": "こんにちは。あなたの名前を教えて"}]
# Generate responses with the model
responses = chat_pipeline(
user_input,
max_length=50,
do_sample=True,
num_return_sequences=3,
)
# Print the responses
for i, response in enumerate(responses, 1):
print(f"Response {i}: {response['generated_text']}")
# Example output:
# Response 1: [{'role': 'user', 'content': 'こんにちは。あなたの名前を教えて'}, {'role': 'assistant', 'content': 'Sarashina2と言います。本日のご要件を教えて下さい。'}]
# Response 2: [{'role': 'user', 'content': 'こんにちは。あなたの名前を教えて'}, {'role': 'assistant', 'content': 'こんにちは!私の名前はSarashina2です。今日はどうしましたか?'}]
# Response 3: [{'role': 'user', 'content': 'こんにちは。あなたの名前を教えて'}, {'role': 'assistant', 'content': 'Sarashina2と言います。本日のご要件を教えて下さい。'}]
```
## Model overview
- **Model name:** DataPilot/sarashina2.2-3Bx4-moe
- **Base model:** sbintuitions/sarashina2.2-3b-instruct-v0.1
- **Number of experts:** 3 (4 copies fused in total)
- **Total parameters:** roughly 12B
- **Architecture:** Mixture of Experts (MoE)
- **Gate mode:** random
- **Data type:** bfloat16 (chosen for performance and memory efficiency)
- **Intended uses:**
  Suitable for a wide range of natural-language-processing tasks, including dialogue generation, text completion, and custom chatbot development.
## License and citation
This model is built on top of open-source models. Please review the license terms of the original model and of mergekit before reusing or redistributing it. |
annemiekebickleyoy/e4593507-0a1a-477a-9a93-56115e4c71d0 | annemiekebickleyoy | "2025-03-23T07:24:44Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"region:us"
] | null | "2025-03-23T07:24:35Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-1b
model-index:
- name: annemiekebickleyoy/e4593507-0a1a-477a-9a93-56115e4c71d0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# annemiekebickleyoy/e4593507-0a1a-477a-9a93-56115e4c71d0
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Harshraj8721/agri_finetuned_model-finetuned-batch-30-finetuned-batch-90 | Harshraj8721 | "2025-03-10T02:13:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:Harshraj8721/agri_finetuned_model-finetuned-batch-30",
"base_model:finetune:Harshraj8721/agri_finetuned_model-finetuned-batch-30",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-10T02:10:30Z" | ---
library_name: transformers
license: mit
base_model: Harshraj8721/agri_finetuned_model-finetuned-batch-30
tags:
- generated_from_trainer
model-index:
- name: agri_finetuned_model-finetuned-batch-30-finetuned-batch-90
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agri_finetuned_model-finetuned-batch-30-finetuned-batch-90
This model is a fine-tuned version of [Harshraj8721/agri_finetuned_model-finetuned-batch-30](https://huggingface.co/Harshraj8721/agri_finetuned_model-finetuned-batch-30) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
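As a hedged usage sketch (not in the original card), inference via the `transformers` text-classification pipeline might look like the following; the label names depend on the fine-tuning setup and are not documented here.

```python
# Hedged sketch: text classification with this fine-tuned GPT-2 checkpoint.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Harshraj8721/agri_finetuned_model-finetuned-batch-30-finetuned-batch-90",
)
print(clf("Which fertilizer schedule suits wheat in sandy soil?"))
```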
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 819 | 0.0 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
strangerzonehf/Ctoon-Plus-Plus | strangerzonehf | "2025-01-14T10:32:59Z" | 1,439 | 18 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"ctoon++",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-12-25T05:48:07Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- ctoon++
widget:
- text: >-
Ctoon++, A cartoon drawing of a man with black hair and brown eyes. He is
wearing a white t-shirt with a blue jacket over his shoulders. The man has
his hands on his shoulders and his eyes are closed. His hands are covered
with brown gloves. The background is a light beige color with black writing
on it.
output:
url: images/1.png
- text: >-
Ctoon++, A cartoon drawing of a girl with black hair tied up in a ponytail
with a red bow. She is wearing a pink shirt with the words "CANTON YMCA" and
"UHU HANUBALL" written on it. The girl is facing to the right of the image,
with her right arm raised in the air. Her left hand is raised up, while her
right hand is reaching out towards the girl. The background is a light
beige, with a rough texture.
output:
url: images/2.png
- text: >-
Ctoon++, A cartoon drawing of a boy with short dark hair and glasses. He is
wearing a white button up shirt with a brown tie around his neck. He has a
backpack strap over his shoulders. The background is a light peach color.
output:
url: images/3.png
- text: >-
Ctoon++, A cartoon drawing of a woman with long dark brown hair. She is
wearing a light blue button up shirt with a black collar. She has a white
strap over her shoulder. The background is a light peach color.
output:
url: images/4.png
- text: >-
Ctoon++: A cartoon drawing of Elon Musk with short brown hair and a
confident smile. He is wearing a black t-shirt with a blazer, standing in
front of a large rocket. The background is a clear night sky filled with
stars and a glowing moon.
output:
url: images/5.png
- text: >-
Ctoon++, A cartoon drawing of a playful orange tabby cat with green eyes.
The cat is sitting on a blue cushion, holding a small fish toy in its paws.
The background is a light yellow color with paw prints scattered across it.
output:
url: images/6.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Ctoon++
license: creativeml-openrail-m
---

<Gallery />
# Model description for Ctoon-Plus-Plus
## Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 16 & 2330 |
| Epoch | 15 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 33
## Best Dimensions & Inference
| **Dimensions** | **Aspect Ratio** | **Recommendation** |
|-----------------|------------------|---------------------------|
| 1280 x 832 | 3:2 | Best |
| 1024 x 1024 | 1:1 | Default |
### Inference Range
- **Recommended Inference Steps:** 30–35
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline

# Load the FLUX.1-dev base pipeline in bfloat16, then attach the Ctoon++ LoRA weights.
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)

lora_repo = "strangerzonehf/Ctoon-Plus-Plus"
trigger_word = "Ctoon++"  # prompts must include this token to activate the LoRA
pipe.load_lora_weights(lora_repo)

device = torch.device("cuda")
pipe.to(device)
```
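Once the pipeline is set up, a minimal generation sketch using the recommended dimensions and inference range above (the prompt here is illustrative, styled after the widget examples):
```python
prompt = "Ctoon++, A cartoon drawing of a boy with short dark hair and glasses."
image = pipe(
    prompt,
    width=1280,              # best-performing dimensions per the table above (3:2)
    height=832,
    num_inference_steps=32,  # within the recommended 30-35 range
).images[0]
image.save("ctoon.png")
```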
## Trigger words
You should use `Ctoon++` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/strangerzonehf/Ctoon-Plus-Plus/tree/main) them in the Files & versions tab. |
Likich/gemmainstruct-finetune-qualcoding_1000_prompt1 | Likich | "2024-05-28T11:57:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-28T11:57:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lw2134/policy_gte_large_2 | lw2134 | "2024-10-05T19:01:15Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"onnx",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:224",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-large-en-v1.5",
"base_model:quantized:Alibaba-NLP/gte-large-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-10-05T18:59:54Z" | ---
base_model: Alibaba-NLP/gte-large-en-v1.5
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:224
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What are some of the mental health impacts associated with the
increased use of surveillance technologies in schools and workplaces, as mentioned
in the context information?
sentences:
- "15 GV-1.3-004 Obtain input from stakeholder communities to identify unacceptable\
\ use , in \naccordance with activities in the AI RMF Map function . CBRN Information\
\ or Capabilities ; \nObscene, Degrading, and/or \nAbusive Content ; Harmful Bias\
\ \nand Homogenization ; Dangerous, \nViolent, or Hateful Content \nGV-1.3-005\
\ Maintain an updated hierarch y of identified and expected GAI risks connected\
\ to \ncontexts of GAI model advancement and use, potentially including specialized\
\ risk \nlevels for GAI systems that address issues such as model collapse and\
\ algorithmic \nmonoculture. Harmful Bias and Homogenization \nGV-1.3-006 Reevaluate\
\ organizational risk tolerances to account for unacceptable negative risk \n\
(such as where significant negative impacts are imminent, severe harms are actually\
\ occurring, or large -scale risks could occur); and broad GAI negative risks,\
\ \nincluding: Immature safety or risk cultures related to AI and GAI design,\
\ development and deployment, public information integrity risks, including impacts\
\ on democratic processes, unknown long -term performance characteristics of GAI.\
\ Information Integrity ; Dangerous , \nViolent, or Hateful Content ; CBRN \n\
Information or Capabilities \nGV-1.3-007 Devise a plan to halt development or\
\ deployment of a GAI system that poses unacceptable negative risk. CBRN Information\
\ and Capability ; \nInformation Security ; Information \nIntegrity \nAI Actor\
\ Tasks: Governance and Oversight \n \nGOVERN 1.4: The risk management process\
\ and its outcomes are established through transparent policies, procedures, and\
\ other \ncontrols based on organizational risk priorities. \nAction ID Suggested\
\ Action GAI Risks \nGV-1.4-001 Establish policies and mechanisms to prevent\
\ GAI systems from generating \nCSAM, NCII or content that violates the law. \
\ Obscene, Degrading, and/or \nAbusive Content ; Harmful Bias \nand Homogenization\
\ ; \nDangerous, Violent, or Hateful Content\n \nGV-1.4-002 Establish transparent\
\ acceptable use policies for GAI that address illegal use or \napplications of\
\ GAI. CBRN Information or \nCapabilities ; Obscene, \nDegrading, and/or Abusive\
\ Content ; Data Privacy ; Civil \nRights violations\n \nAI Actor Tasks: AI Development,\
\ AI Deployment, Governance and Oversight"
- "DATA PRIVACY \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides a brief\
\ summary of the problems which the principle seeks to address and protect \n\
against, including illustrative examples. \nData privacy is a foundational and\
\ cross-cutting principle required for achieving all others in this framework.\
\ Surveil -\nlance and data collection, sharing, use, and reuse now sit at the\
\ foundation of business models across many industries, \nwith more and more companies\
\ tracking the behavior of the American public, building individual profiles based\
\ on this data, and using this granular-level information as input into automated\
\ systems that further track, profile, and impact the American public. Government\
\ agencies, particularly law enforcement agencies, also use and help develop a\
\ variety of technologies that enhance and expand surveillance capabilities, which\
\ similarly collect data used as input into other automated systems that directly\
\ impact people’s lives. Federal law has not grown to address the expanding scale\
\ of private data collection, or of the ability of governments at all levels to\
\ access that data and leverage the means of private collection. \nMeanwhile,\
\ members of the American public are often unable to access their personal data\
\ or make critical decisions about its collection and use. Data brokers frequently\
\ collect consumer data from numerous sources without consumers’ permission or\
\ \nknowledge.60 Moreover, there is a risk that inaccurate and faulty data can\
\ be used to \nmake decisions about their lives, such as whether they will qualify\
\ for a loan or get a job. Use of surveillance \ntechnologies has increased in\
\ schools and workplaces, and, when coupled with consequential management and\
\ \nevaluation decisions, it is leading to mental health harms such as lowered\
\ self-confidence, anxiet y, depression, and \na reduced ability to use analytical\
\ reasoning.61 Documented patterns show that personal data is being aggregated\
\ by \ndata brokers to profile communities in harmful ways.62 The impact of all\
\ this data harvesting is corrosive, \nbreeding distrust, anxiety, and other mental\
\ health problems; chilling speech, protest, and worker organizing; and \nthreatening\
\ our democratic process.63 The American public should be protected from these\
\ growing risks. \nIncreasingl y, some companies are taking these concerns seriously\
\ and integrating mechanisms to protect consumer \nprivacy into their products\
\ by design and by default, including by minimizing the data they collect, communicating\
\ collection and use clearl y, and improving security practices. Federal government\
\ surveillance and other collection and \nuse of data is governed by legal protections\
\ that help to protect civil liberties and provide for limits on data retention\
\ in some cases. Many states have also enacted consumer data privacy protection\
\ regimes to address some of these harms. \nHoweve r, these are not yet standard\
\ practices, and the United States lacks a comprehensive statutory or regulatory\
\ \nframework governing the rights of the public when it comes to personal data.\
\ While a patchwork of laws exists to guide the collection and use of personal\
\ data in specific contexts, including health, employment, education, and credit,\
\ it can be unclear how these laws apply in other contexts and in an increasingly\
\ automated societ y. Additional protec\n-\ntions would assure the American public\
\ that the automated systems they use are not monitoring their activities, collecting\
\ information on their lives, or otherwise surveilling them without context-specific\
\ consent or legal authori\n-\nty. \n31"
- "Applying The Blueprint for an AI Bill of Rights \nSENSITIVE DATA: Data and metadata\
\ are sensitive if they pertain to an individual in a sensitive domain \n(defined\
\ below); are generated by technologies used in a sensitive domain; can be used\
\ to infer data from a \nsensitive domain or sensitive data about an individual\
\ (such as disability-related data, genomic data, biometric data, behavioral data,\
\ geolocation data, data related to interaction with the criminal justice system,\
\ relationship history and legal status such as custody and divorce information,\
\ and home, work, or school environmental data); or have the reasonable potential\
\ to be used in ways that are likely to expose individuals to meaningful harm,\
\ such as a loss of privacy or financial harm due to identity theft. Data and\
\ metadata generated by or about those who are not yet legal adults is also sensitive,\
\ even if not related to a sensitive domain. Such data includes, but is not limited\
\ to, numerical, text, image, audio, or video data. \nSENSITIVE DOMAINS: “Sensitive\
\ domains” are those in which activities being conducted can cause material \n\
harms, including significant adverse effects on human rights such as autonomy\
\ and dignit y, as well as civil liber-\nties and civil rights. Domains that have\
\ historically been singled out as deserving of enhanced data protections \nor\
\ where such enhanced protections are reasonably expected by the public include,\
\ but are not limited to, health, family planning and care, employment, education,\
\ criminal justice, and personal finance. In the context of this framework, such\
\ domains are considered sensitive whether or not the specifics of a system context\
\ would necessitate coverage under existing la w, and domains and data that are\
\ considered sensitive are under-\nstood to change over time based on societal\
\ norms and context. \nSURVEILLANCE TECHNOLOGY : “Surveillance technology” refers\
\ to products or services marketed for \nor that can be lawfully used to detect,\
\ monitor, intercept, collect, exploit, preserve, protect, transmit, and/or \n\
retain data, identifying information, or communications concerning individuals\
\ or groups. This framework \nlimits its focus to both government and commercial\
\ use of surveillance technologies when juxtaposed with \nreal-time or subsequent\
\ automated analysis and when such systems have a potential for meaningful impact\
\ \non individuals’ or communities’ rights, opportunities, or access. UNDERSERVED\
\ COMMUNITIES: The term “underserved communities” refers to communities that have\
\ \nbeen systematically denied a full opportunity to participate in aspects of\
\ economic, social, and civic life, as \nexemplified by the list in the preceding\
\ definition of “equit y.” \n11"
- source_sentence: Discuss the implications of automatic signature verification software
on voter disenfranchisement in the United States, as highlighted in the article
by Kyle Wiggers. What are the potential risks associated with this technology?
sentences:
- 'ENDNOTES
96. National Science Foundation. NSF Program on Fairness in Artificial Intelligence
in Collaboration
with Amazon (FAI). Accessed July 20, 2022.
https://www.nsf.gov/pubs/2021/nsf21585/nsf21585.htm
97. Kyle Wiggers. Automatic signature verification software threatens to disenfranchise
U.S. voters.
VentureBeat. Oct. 25, 2020.
https://venturebeat.com/2020/10/25/automatic-signature-verification-software-threatens-to-disenfranchise-u-s-voters/
98. Ballotpedia. Cure period for absentee and mail-in ballots. Article retrieved
Apr 18, 2022.
https://ballotpedia.org/Cure_period_for_absentee_and_mail-in_ballots
99. Larry Buchanan and Alicia Parlapiano. Two of these Mail Ballot Signatures
are by the Same Person.
Which Ones? New York Times. Oct. 7, 2020.
https://www.nytimes.com/interactive/2020/10/07/upshot/mail-voting-ballots-signature-
matching.html
100. Rachel Orey and Owen Bacskai. The Low Down on Ballot Curing. Nov. 04, 2020.
https://bipartisanpolicy.org/blog/the-low-down-on-ballot-curing/101. Andrew Kenney.
''I''m shocked that they need to have a smartphone'': System for unemployment
benefits exposes digital divide. USA Today. May 2, 2021.
https://www.usatoday.com/story/tech/news/2021/05/02/unemployment-benefits-system-leaving-
people-behind/4915248001/
102. Allie Gross. UIA lawsuit shows how the state criminalizes the unemployed
. Detroit Metro-Times.
Sep. 18, 2015.
https://www.metrotimes.com/news/uia-lawsuit-shows-how-the-state-criminalizes-the-unemployed-2369412
103. Maia Szalavitz. The Pain Was Unbearable. So Why Did Doctors Turn Her Away?
Wired. Aug. 11,
2021. https://www.wired.com/story/opioid-drug-addiction-algorithm-chronic-pain/
104. Spencer Soper. Fired by Bot at Amazon: "It''s You Against the Machine" .
Bloomberg, Jun. 28, 2021.
https://www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-turns-to-machine-
managers-and-workers-are-losing-out
105. Definitions of ‘equity’ and ‘underserved communities’ can be found in the
Definitions section of
this document as well as in Executive Order on Advancing Racial Equity and Support
for Underserved
Communities Through the Federal Government:https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/
106. HealthCare.gov. Navigator - HealthCare.gov Glossary. Accessed May 2, 2022.
https://www.healthcare.gov/glossary/navigator/
72'
- "SAFE AND EFFECTIVE \nSYSTEMS \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section\
\ provides a brief summary of the problems which the principle seeks to address\
\ and protect \nagainst, including illustrative examples. \n• AI-enabled “nudification”\
\ technology that creates images where people appear to be nude—including apps\
\ that\nenable non-technical users to create or alter images of individuals without\
\ their consent—has proliferated at an\nalarming rate. Such technology is becoming\
\ a common form of image-based abuse that disproportionately\nimpacts women. As\
\ these tools become more sophisticated, they are producing altered images that\
\ are increasing -\nly realistic and are difficult for both humans and AI to detect\
\ as inauthentic. Regardless of authenticit y, the expe -\nrience of harm to victims\
\ of non-consensual intimate images can be devastatingly real—affecting their\
\ personal\nand professional lives, and impacting their mental and physical health.10\n\
• A company installed AI-powered cameras in its delivery vans in order to evaluate\
\ the road safety habits of its driv -\ners, but the system incorrectly penalized\
\ drivers when other cars cut them off or when other events beyond\ntheir control\
\ took place on the road. As a result, drivers were incorrectly ineligible to\
\ receive a bonus.11\n17"
- "NOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations\
\ for automated systems are meant to serve as a blueprint for the development\
\ of additional \ntechnical standards and practices that are tailored for particular\
\ sectors and contexts. \nTailored to the level of risk. An assessment should\
\ be done to determine the level of risk of the auto -\nmated system. In settings\
\ where the consequences are high as determined by a risk assessment, or extensive\
\ \noversight is expected (e.g., in criminal justice or some public sector settings),\
\ explanatory mechanisms should be built into the system design so that the system’s\
\ full behavior can be explained in advance (i.e., only fully transparent models\
\ should be used), rather than as an after-the-decision interpretation. In other\
\ settings, the extent of explanation provided should be tailored to the risk\
\ level. \nValid. The explanation provided by a system should accurately reflect\
\ the factors and the influences that led \nto a particular decision, and should\
\ be meaningful for the particular customization based on purpose, target, and\
\ level of risk. While approximation and simplification may be necessary for the\
\ system to succeed based on the explanatory purpose and target of the explanation,\
\ or to account for the risk of fraud or other concerns related to revealing decision-making\
\ information, such simplifications should be done in a scientifically supportable\
\ way. Where appropriate based on the explanatory system, error ranges for the\
\ explanation should be calculated and included in the explanation, with the choice\
\ of presentation of such information balanced with usability and overall interface\
\ complexity concerns. \nDemonstrate protections for notice and explanation \n\
Reporting. Summary reporting should document the determinations made based on\
\ the above consider -\nations, including: the responsible entities for accountability\
\ purposes; the goal and use cases for the system, identified users, and impacted\
\ populations; the assessment of notice clarity and timeliness; the assessment\
\ of the explanation's validity and accessibility; the assessment of the level\
\ of risk; and the account and assessment of how explanations are tailored, including\
\ to the purpose, the recipient of the explanation, and the level of risk. Individualized\
\ profile information should be made readily available to the greatest extent\
\ possible that includes explanations for any system impacts or inferences. Reporting\
\ should be provided in a clear plain language and machine-readable manner. \n\
44"
- source_sentence: How does the document aim to bridge the gap between theoretical
principles and practical applications in the context of AI rights?
sentences:
- "FROM \nPRINCIPLES \nTO PRACTICE \nA T ECHINCAL COMPANION TO\nTHE Blueprint for\
\ an \nAI B ILL OF RIGHTS\n12"
- "3 the abuse, misuse, and unsafe repurposing by humans (adversarial or not ),\
\ and others result \nfrom interactions between a human and an AI system. \n\
• Time scale: GAI risks may materialize abruptly or across extended periods\
\ . Example s include \nimmediate (and/or prolonged) emotional harm and potential\
\ risks to physical safety due to the \ndistribution of harmful deepfake images\
\ , or the lo ng-term effect of disinformation on soci etal \ntrust in public \
\ institutions . \nThe presence of risks and where they fall along the dimensions\
\ above will vary depending on the \ncharacteristics of the GAI model , system,\
\ or use case at hand. These characteristics include but are not \nlimited to\
\ GAI model or system architecture, training mechanisms and libraries , data\
\ types used for \ntraining or fine -tuning , levels of model access or availability\
\ of model weights, and application or use \ncase context. \nOrganizations may\
\ choose to tailor how they measure GAI risks based on these characteristics\
\ . They may \nadditionally wish to allocate risk management resources relative\
\ to the severity and likelihood of \nnegative impact s, including where and how\
\ these risks manifest , and their direct and material impacts \nharms in the\
\ context of GAI use. Mitigations for model or system level risks may differ from\
\ mitigations \nfor use-case or ecosystem level risks. \nImportantly, some GAI\
\ risks are un known , and are therefore difficult to properly scope or evaluate\
\ given \nthe uncertaint y about potential GAI scale, complexity, and capabilities.\
\ Other risks may be known but \ndifficult to estimate given the wide range of\
\ GAI stakeholders, uses, inputs, and outputs . Challenges with \nrisk estimation\
\ are aggravated by a lack of visibility into GAI training data, and the generally\
\ immature \nstate of the science of AI measurement and safety today . This document\
\ focuses on risks for which there \nis an existing empirical evidence base at\
\ the time this profile was written ; for example, speculative risks \nthat may\
\ potentially arise in more advanced, future GAI systems are not considered .\
\ Future updates may \nincorporate additional risks or provide further details\
\ on the risks identified below. \nTo guide organizations in identifying and managing\
\ GAI risks, a set of risks unique to or exacerbated by \nthe development and\
\ use of GAI are defined below.5 Each risk is labeled according to the outcome\
\ , \nobject, or source of the risk (i.e., some are risks “to ” a subject or\
\ domain and others are risks “of” or \n“from” an issue or theme ). These\
\ risks provide a lens through which organizations can frame and execute \nrisk\
\ management efforts. To help streamline risk management efforts, each risk is\
\ mapped in Section 3 \n(as well as in tables in Appendix B) to relevant Trustworthy\
\ AI Characteristics identified in the AI RMF . \n \n \n5 These risks can be\
\ further categorized by organizations depending on their unique approaches to\
\ risk definition \nand management. One possible way to further categorize these\
\ risks, derived in part from the UK’s International \nScientific Report on the\
\ Safety of Advanced AI , could be: 1 ) Technical / Model risks (or risk from\
\ malfunction): \nConfabulation; Dangerous or Violent Recommendations; Data Privacy;\
\ Value Chain and Component Integration; \nHarmful Bias, and Homogenization ;\
\ 2) Misuse by humans (or malicious use): CBRN Information or Capabilities ;\
\ \nData Privacy; Human -AI Configuration; Obscene, Degrading, and/or Abusive Content;\
\ Information Integrity; \nInformation Security; 3) Ecosystem / societal risks\
\ (or systemic risks) : Data Privacy; Environmental; Intellectual \nProperty .\
\ We also note that some risks are cross -cutting between these categories."
- "5 operations , or other cyberattacks ; increas ed attack surface for targeted\
\ cyberattacks , which may \ncompromise a system’s availability or the confidentiality\
\ or integrity of training data, code, or \nmodel weights. \n10. Intellectual\
\ Property: Eased production or replication of alleged copyrighted, trademarked,\
\ or \nlicensed content without authorization (possibly in situations which do\
\ not fall under fair use ); \neased exposure of trade secrets; or plagiari sm\
\ or illegal replication . \n11. Obscen e, Degrading, and/or A busive Content\
\ : Eased production of and access to obscene , \ndegrading, and/or abusive imagery\
\ which can cause harm , including synthetic child sexual abuse \nmaterial (CSAM)\
\ , and nonconsensual intimate images (NCII) of adults . \n12. Value Chain and\
\ Component Integration : Non-transparent or untraceable integration of \nupstream\
\ third- party components, including data that has been improperly obtained or\
\ not \nprocessed and cleaned due to increased automation from GAI; improper supplier\
\ vetting across \nthe AI lifecycle ; or other issues that diminish transparency\
\ or accountability for downstream \nusers. \n2.1. CBRN Information or Capabilities\
\ \nIn the future, GAI may enable malicious actors to more easily access CBRN\
\ weapons and/or relevant \nknowledge, information , materials, tools, or technologies\
\ that could be misused to assist in the design, \ndevelopment, production, or\
\ use of CBRN weapons or other dangerous materials or agents . While \nrelevant\
\ biological and chemical threat knowledge and information is often publicly\
\ accessible , LLMs \ncould facilitate its analysis or synthesis , particularly\
\ by individuals without formal scientific training or \nexpertise. \nRecent\
\ research on this topic found that LLM outputs regarding biological threat creation\
\ and attack \nplanning pr ovided minima l assistance beyond traditional search\
\ engine queries, suggesting that state -of-\nthe-art LLMs at the time these studies\
\ were conducted do not substantially increase the operational \nlikelihood of\
\ such an attack. The physical synthesis development, production, and use of\
\ chemical or \nbiological agents will continue to require both applicable expertise\
\ and supporting materials and \ninfrastructure . The impact of GAI on chemical\
\ or biological agent misuse will depend on what the key \nbarriers for malicious\
\ actors are (e.g., whether information access is one such barrier ), and how\
\ well GAI \ncan help actors address those barriers . \nFurthermore , chemical\
\ and biological design tools (BDTs) – highly specialized AI systems trained\
\ on \nscientific data that aid in chemical and biological design – may augment\
\ design capabilities in chemistry \nand biology beyond what text -based LLMs\
\ are able to provide . As these models become more \nefficacious , including for\
\ beneficial uses, it will be important to assess their potential to be used for\
\ \nharm, such as the ideation and design of novel harmful chemical or biological\
\ agents . \nWhile some of these described capabilities lie beyond the reach\
\ of existing GAI tools, ongoing \nassessments of this risk would be enhanced\
\ by monitoring both the ability of AI tools to facilitate CBRN \nweapons planning\
\ and GAI systems’ connection or access to relevant data and tools . \nTrustworthy\
\ AI Characteristic : Safe , Explainable and Interpretable"
- source_sentence: What are the key components that should be included in the ongoing
monitoring procedures for automated systems to ensure their performance remains
acceptable over time?
sentences:
- "AI B ILL OF RIGHTS\nFFECTIVE SYSTEMS\nineffective systems. Automated systems\
\ should be \ncommunities, stakeholders, and domain experts to identify \nSystems\
\ should undergo pre-deployment testing, risk \nthat demonstrate they are safe\
\ and effective based on \nincluding those beyond the intended use, and adherence\
\ to \nprotective measures should include the possibility of not \nAutomated systems\
\ should not be designed with an intent \nreasonably foreseeable possibility of\
\ endangering your safety or the safety of your communit y. They should \nstemming\
\ from unintended, yet foreseeable, uses or \n \n \n \n \n SECTION TITLE\n\
BLUEPRINT FOR AN\nSAFE AND E \nYou should be protected from unsafe or \ndeveloped\
\ with consultation from diverse \nconcerns, risks, and potential impacts of the\
\ system. \nidentification and mitigation, and ongoing monitoring \ntheir intended\
\ use, mitigation of unsafe outcomes \ndomain-specific standards. Outcomes of\
\ these \ndeploying the system or removing a system from use. \nor \nbe designed\
\ to proactively protect you from harms \nimpacts of automated systems. You should\
\ be protected from inappropriate or irrelevant data use in the \ndesign, development,\
\ and deployment of automated systems, and from the compounded harm of its reuse.\
\ \nIndependent evaluation and reporting that confirms that the system is safe\
\ and effective, including reporting of \nsteps taken to mitigate potential harms,\
\ should be performed and the results made public whenever possible. \nALGORITHMIC\
\ DISCRIMINATION P ROTECTIONS\nYou should not face discrimination by algorithms\
\ and systems should be used and designed in \nan equitable way. Algorithmic\
\ discrimination occurs when automated systems contribute to unjustified \ndifferent\
\ treatment or impacts disfavoring people based on their race, color, ethnicity,\
\ sex (including \npregnancy, childbirth, and related medical conditions, gender\
\ identity, intersex status, and sexual \norientation), religion, age, national\
\ origin, disability, veteran status, genetic information, or any other \nclassification\
\ protected by law. Depending on the specific circumstances, such algorithmic\
\ discrimination \nmay violate legal protections. Designers, developers, and\
\ deployers of automated systems should take \nproactive and continuous measures\
\ to protect individuals and communities from algorithmic \ndiscrimination and\
\ to use and design systems in an equitable way. This protection should include\
\ proactive \nequity assessments as part of the system design, use of representative\
\ data and protection against proxies \nfor demographic features, ensuring accessibility\
\ for people with disabilities in design and development, \npre-deployment and\
\ ongoing disparity testing and mitigation, and clear organizational oversight.\
\ Independent \nevaluation and plain language reporting in the form of an algorithmic\
\ impact assessment, including \ndisparity testing results and mitigation information,\
\ should be performed and made public whenever \npossible to confirm these protections.\
\ \n5"
- "DATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations\
\ for automated systems are meant to serve as a blueprint for the development\
\ of additional \ntechnical standards and practices that are tailored for particular\
\ sectors and contexts. \nIn addition to the privacy expectations above for general\
\ non-sensitive data, any system collecting, using, shar-\ning, or storing sensitive\
\ data should meet the expectations belo w. Depending on the technological use\
\ case and \nbased on an ethical assessment, consent for sensitive data may need\
\ to be acquired from a guardian and/or child. \nProvide enhanced protections\
\ for data related to sensitive domains \nNecessar y function s only . Sensitive\
\ data should only be used for functions strictly necessary for that \ndomain\
\ or for functions that are required for administrative reasons (e.g., school\
\ attendance records), unless \nconsent is acquired, if appropriate, and the additional\
\ expectations in this section are met. Consent for non-\nnecessary functions\
\ should be optional, i.e., should not be required, incentivized, or coerced in\
\ order to \nreceive opportunities or access to services. In cases where data\
\ is provided to an entity (e.g., health insurance \ncompany) in order to facilitate\
\ payment for such a need, that data should only be used for that purpose. \n\
Ethical review and use prohibitions. Any use of sensitive data or decision process\
\ based in part on sensi-\ntive data that might limit rights, opportunities, or\
\ access, whether the decision is automated or not, should go \nthrough a thorough\
\ ethical review and monitoring, both in advance and by periodic review (e.g.,\
\ via an indepen-\ndent ethics committee or similarly robust process). In some\
\ cases, this ethical review may determine that data \nshould not be used or shared\
\ for specific uses even with consent. Some novel uses of automated systems in\
\ this \ncontext, where the algorithm is dynamically developing and where the\
\ science behind the use case is not well \nestablished, may also count as human\
\ subject experimentation, and require special review under organizational \n\
compliance bodies applying medical, scientific, and academic human subject experimentation\
\ ethics rules and \ngovernance procedures. \nData quality. In sensitive domains,\
\ entities should be especially careful to maintain the quality of data to \n\
avoid adverse consequences arising from decision-making based on flawed or inaccurate\
\ data. Such care is \nnecessary in a fragmented, complex data ecosystem and for\
\ datasets that have limited access such as for fraud \nprevention and law enforcement.\
\ It should be not left solely to individuals to carry the burden of reviewing\
\ and \ncorrecting data. Entities should conduct regula r, independent audits\
\ and take prompt corrective measures to \nmaintain accurate, timel y, and complete\
\ data. \nLimit access to sensitive data and derived data. Sensitive data and\
\ derived data should not be sold, \nshared, or made public as part of data brokerage\
\ or other agreements. Sensitive data includes data that can be \nused to infer\
\ sensitive information; even systems that are not directly marketed as sensitive\
\ domain technologies \nare expected to keep sensitive data private. Access to\
\ such data should be limited based on necessity and based \non a principle of\
\ local control, such that those individuals closest to the data subject have\
\ more access while \nthose who are less proximate do not (e.g., a teacher has\
\ access to their students’ daily progress data while a \nsuperintendent does\
\ not). \nReporting. In addition to the reporting on data privacy (as listed\
\ above for non-sensitive data), entities devel-\noping technologies related to\
\ a sensitive domain and those collecting, using, storing, or sharing sensitive\
\ data \nshould, whenever appropriate, regularly provide public reports describing:\
\ any data security lapses or breaches \nthat resulted in sensitive data leaks;\
\ the numbe r, type, and outcomes of ethical pre-reviews undertaken; a \ndescription\
\ of any data sold, shared, or made public, and how that data was assessed to\
\ determine it did not pres-\nent a sensitive data risk; and ongoing risk identification\
\ and management procedures, and any mitigation added \nbased on these procedures.\
\ Reporting should be provided in a clear and machine-readable manne r. \n38"
- "SAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\n\
The expectations for automated systems are meant to serve as a blueprint for the\
\ development of additional \ntechnical standards and practices that are tailored\
\ for particular sectors and contexts. \nOngoing monitoring. Automated systems\
\ should have ongoing monitoring procedures, including recalibra -\ntion procedures,\
\ in place to ensure that their performance does not fall below an acceptable\
\ level over time, \nbased on changing real-world conditions or deployment contexts,\
\ post-deployment modification, or unexpect -\ned conditions. This ongoing monitoring\
\ should include continuous evaluation of performance metrics and harm assessments,\
\ updates of any systems, and retraining of any machine learning models as necessary,\
\ as well as ensuring that fallback mechanisms are in place to allow reversion\
\ to a previously working system. Monitor\n-\ning should take into account the\
\ performance of both technical system components (the algorithm as well as any\
\ hardware components, data inputs, etc.) and human operators. It should include\
\ mechanisms for testing the actual accuracy of any predictions or recommendations\
\ generated by a system, not just a human operator’s determination of their accuracy.\
\ Ongoing monitoring procedures should include manual, human-led monitor\n-\n\
ing as a check in the event there are shortcomings in automated monitoring systems.\
\ These monitoring proce -\ndures should be in place for the lifespan of the deployed\
\ automated system. \nClear organizational oversight. Entities responsible for\
\ the development or use of automated systems should lay out clear governance\
\ structures and procedures. This includes clearly-stated governance proce\n\
-\ndures before deploying the system, as well as responsibility of specific individuals\
\ or entities to oversee ongoing assessment and mitigation. Organizational stakeholders\
\ including those with oversight of the business process or operation being automated,\
\ as well as other organizational divisions that may be affected due to the use\
\ of the system, should be involved in establishing governance procedures. Responsibility\
\ should rest high enough in the organization that decisions about resources,\
\ mitigation, incident response, and potential rollback can be made promptly,\
\ with sufficient weight given to risk mitigation objectives against competing\
\ concerns. Those holding this responsibility should be made aware of any use\
\ cases with the potential for meaningful impact on people’s rights, opportunities,\
\ or access as determined based on risk identification procedures. In some cases,\
\ it may be appropriate for an independent ethics review to be conducted before\
\ deployment. \nAvoid inappropriate, low-quality, or irrelevant data use and the\
\ compounded harm of its reuse \nRelevant and high-quality data. Data used as\
\ part of any automated system’s creation, evaluation, or \ndeployment should\
\ be relevant, of high quality, and tailored to the task at hand. Relevancy should\
\ be \nestablished based on research-backed demonstration of the causal influence\
\ of the data to the specific use case \nor justified more generally based on\
\ a reasonable expectation of usefulness in the domain and/or for the \nsystem\
\ design or ongoing development. Relevance of data should not be established solely\
\ by appealing to \nits historical connection to the outcome. High quality and\
\ tailored data should be representative of the task at \nhand and errors from\
\ data entry or other sources should be measured and limited. Any data used as\
\ the target \nof a prediction process should receive particular attention to\
\ the quality and validity of the predicted outcome \nor label to ensure the goal\
\ of the automated system is appropriately identified and measured. Additionally\
\ , \njustification should be documented for each data attribute and source to\
\ explain why it is appropriate to use \nthat data to inform the results of the\
\ automated system and why such use will not violate any applicable laws. \nIn\
\ cases of high-dimensional and/or derived attributes, such justifications can\
\ be provided as overall \ndescriptions of the attribute generation process and\
\ appropriateness. \n19"
- source_sentence: What are the key principles and frameworks mentioned in the white
paper that govern the implementation of AI in national security and defense activities?
sentences:
- "APPENDIX\n• OSTP conducted meetings with a variety of stakeholders in the private\
\ sector and civil society. Some of these\nmeetings were specifically focused\
\ on providing ideas related to the development of the Blueprint for an AI\nBill\
\ of Rights while others provided useful general context on the positive use cases,\
\ potential harms, and/or\noversight possibilities for these technologies. Participants\
\ in these conversations from the private sector and\ncivil society included:\n\
Adobe \nAmerican Civil Liberties Union (ACLU) The Aspen Commission on Information\
\ Disorder The Awood Center The Australian Human Rights Commission Biometrics\
\ Institute The Brookings Institute BSA | The Software Alliance Cantellus Group\
\ Center for American Progress Center for Democracy and Technology Center on Privacy\
\ and Technology at Georgetown Law Christiana Care Color of Change Coworker Data\
\ Robot Data Trust Alliance Data and Society Research Institute Deepmind EdSAFE\
\ AI Alliance Electronic Privacy Information Center (EPIC) Encode Justice Equal\
\ AI Google Hitachi's AI Policy Committee The Innocence Project Institute of Electrical\
\ and Electronics Engineers (IEEE) Intuit Lawyers Committee for Civil Rights Under\
\ Law Legal Aid Society The Leadership Conference on Civil and Human Rights Meta\
\ Microsoft The MIT AI Policy Forum Movement Alliance Project The National Association\
\ of Criminal Defense Lawyers O’Neil Risk Consulting & Algorithmic Auditing The\
\ Partnership on AI Pinterest The Plaintext Group pymetrics SAP The Security Industry\
\ Association Software and Information Industry Association (SIIA) Special Competitive\
\ Studies Project Thorn United for Respect University of California at Berkeley\
\ Citris Policy Lab University of California at Berkeley Labor Center Unfinished/Project\
\ Liberty Upturn US Chamber of Commerce US Chamber of Commerce Technology Engagement\
\ Center \nA.I. Working Group\nVibrent HealthWarehouse Worker ResourceCenterWaymap\n\
62"
- "This white paper recognizes that national security (which includes certain law\
\ enforcement and \nhomeland security activities) and defense activities are of\
\ increased sensitivity and interest to our nation’s \nadversaries and are often\
\ subject to special requirements, such as those governing classified information\
\ and \nother protected data. Such activities require alternative, compatible\
\ safeguards through existing policies that \ngovern automated systems and AI,\
\ such as the Department of Defense (DOD) AI Ethical Principles and \nResponsible\
\ AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles\
\ and \nFramework. The implementation of these policies to national security and\
\ defense activities can be informed by \nthe Blueprint for an AI Bill of Rights\
\ where feasible. \nThe Blueprint for an AI Bill of Rights is not intended to,\
\ and does not, create any legal right, benefit, or \ndefense, substantive or\
\ procedural, enforceable at law or in equity by any party against the United\
\ States, its \ndepartments, agencies, or entities, its officers, employees, or\
\ agents, or any other person, nor does it constitute a \nwaiver of sovereign\
\ immunity. \nCopyright Information \nThis document is a work of the United States\
\ Government and is in the public domain (see 17 U.S.C. §105). \n2"
- "This white paper recognizes that national security (which includes certain law\
\ enforcement and \nhomeland security activities) and defense activities are of\
\ increased sensitivity and interest to our nation’s \nadversaries and are often\
\ subject to special requirements, such as those governing classified information\
\ and \nother protected data. Such activities require alternative, compatible\
\ safeguards through existing policies that \ngovern automated systems and AI,\
\ such as the Department of Defense (DOD) AI Ethical Principles and \nResponsible\
\ AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles\
\ and \nFramework. The implementation of these policies to national security and\
\ defense activities can be informed by \nthe Blueprint for an AI Bill of Rights\
\ where feasible. \nThe Blueprint for an AI Bill of Rights is not intended to,\
\ and does not, create any legal right, benefit, or \ndefense, substantive or\
\ procedural, enforceable at law or in equity by any party against the United\
\ States, its \ndepartments, agencies, or entities, its officers, employees, or\
\ agents, or any other person, nor does it constitute a \nwaiver of sovereign\
\ immunity. \nCopyright Information \nThis document is a work of the United States\
\ Government and is in the public domain (see 17 U.S.C. §105). \n2"
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.7222222222222222
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9444444444444444
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7222222222222222
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31481481481481477
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999993
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999996
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7222222222222222
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9444444444444444
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.87665680931096
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8348765432098766
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8348765432098766
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.7222222222222222
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9444444444444444
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.9814814814814815
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.7222222222222222
name: Dot Precision@1
- type: dot_precision@3
value: 0.31481481481481477
name: Dot Precision@3
- type: dot_precision@5
value: 0.1962962962962962
name: Dot Precision@5
- type: dot_precision@10
value: 0.09999999999999996
name: Dot Precision@10
- type: dot_recall@1
value: 0.7222222222222222
name: Dot Recall@1
- type: dot_recall@3
value: 0.9444444444444444
name: Dot Recall@3
- type: dot_recall@5
value: 0.9814814814814815
name: Dot Recall@5
- type: dot_recall@10
value: 1.0
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.8752777468856755
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8333333333333334
name: Dot Mrr@10
- type: dot_map@100
value: 0.8333333333333334
name: Dot Map@100
---
# SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) <!-- at revision 104333d6af6f97649377c2afbde10a7704870c7b -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub. trust_remote_code is needed because the
# gte-large-en-v1.5 base model ships custom modeling code.
model = SentenceTransformer("lw2134/policy_gte_large_2", trust_remote_code=True)
# Run inference
sentences = [
'What are the key principles and frameworks mentioned in the white paper that govern the implementation of AI in national security and defense activities?',
'This white paper recognizes that national security (which includes certain law enforcement and \nhomeland security activities) and defense activities are of increased sensitivity and interest to our nation’s \nadversaries and are often subject to special requirements, such as those governing classified information and \nother protected data. Such activities require alternative, compatible safeguards through existing policies that \ngovern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and \nResponsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and \nFramework. The implementation of these policies to national security and defense activities can be informed by \nthe Blueprint for an AI Bill of Rights where feasible. \nThe Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or \ndefense, substantive or procedural, enforceable at law or in equity by any party against the United States, its \ndepartments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a \nwaiver of sovereign immunity. \nCopyright Information \nThis document is a work of the United States Government and is in the public domain (see 17 U.S.C. §105). \n2',
"APPENDIX\n• OSTP conducted meetings with a variety of stakeholders in the private sector and civil society. Some of these\nmeetings were specifically focused on providing ideas related to the development of the Blueprint for an AI\nBill of Rights while others provided useful general context on the positive use cases, potential harms, and/or\noversight possibilities for these technologies. Participants in these conversations from the private sector and\ncivil society included:\nAdobe \nAmerican Civil Liberties Union (ACLU) The Aspen Commission on Information Disorder The Awood Center The Australian Human Rights Commission Biometrics Institute The Brookings Institute BSA | The Software Alliance Cantellus Group Center for American Progress Center for Democracy and Technology Center on Privacy and Technology at Georgetown Law Christiana Care Color of Change Coworker Data Robot Data Trust Alliance Data and Society Research Institute Deepmind EdSAFE AI Alliance Electronic Privacy Information Center (EPIC) Encode Justice Equal AI Google Hitachi's AI Policy Committee The Innocence Project Institute of Electrical and Electronics Engineers (IEEE) Intuit Lawyers Committee for Civil Rights Under Law Legal Aid Society The Leadership Conference on Civil and Human Rights Meta Microsoft The MIT AI Policy Forum Movement Alliance Project The National Association of Criminal Defense Lawyers O’Neil Risk Consulting & Algorithmic Auditing The Partnership on AI Pinterest The Plaintext Group pymetrics SAP The Security Industry Association Software and Information Industry Association (SIIA) Special Competitive Studies Project Thorn United for Respect University of California at Berkeley Citris Policy Lab University of California at Berkeley Labor Center Unfinished/Project Liberty Upturn US Chamber of Commerce US Chamber of Commerce Technology Engagement Center \nA.I. Working Group\nVibrent HealthWarehouse Worker ResourceCenterWaymap\n62",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator); a usage sketch follows the metric tables below.
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7222 |
| cosine_accuracy@3 | 0.9444 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.7222 |
| cosine_precision@3 | 0.3148 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.7222 |
| cosine_recall@3 | 0.9444 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.8767 |
| cosine_mrr@10 | 0.8349 |
| **cosine_map@100** | **0.8349** |
| dot_accuracy@1 | 0.7222 |
| dot_accuracy@3 | 0.9444 |
| dot_accuracy@5 | 0.9815 |
| dot_accuracy@10 | 1.0 |
| dot_precision@1 | 0.7222 |
| dot_precision@3 | 0.3148 |
| dot_precision@5 | 0.1963 |
| dot_precision@10 | 0.1 |
| dot_recall@1 | 0.7222 |
| dot_recall@3 | 0.9444 |
| dot_recall@5 | 0.9815 |
| dot_recall@10 | 1.0 |
| dot_ndcg@10 | 0.8753 |
| dot_mrr@10 | 0.8333 |
| dot_map@100 | 0.8333 |
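For reference, the following is a minimal sketch of how metrics like the ones above can be reproduced with `InformationRetrievalEvaluator`. The query/corpus ids and texts are hypothetical placeholders, and `sentence_transformers_model_id` is the same placeholder used in the usage example above.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical evaluation data: query id -> query text, doc id -> passage text,
# and the set of relevant doc ids per query.
queries = {"q1": "What principles govern the use of AI in national security activities?"}
corpus = {"d1": "This white paper recognizes that national security and defense activities ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dev",
)

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder repo id
results = evaluator(model)  # returns a dict of metrics, e.g. "dev_cosine_map@100"
print(results)
```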
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 224 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 224 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 23 tokens</li><li>mean: 36.01 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 569.67 tokens</li><li>max: 1018 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---|:---|
| <code>What are the primary objectives outlined in the "Blueprint for an AI Bill of Rights" as it pertains to the American people?</code> | <code>BLUEPRINT FOR AN <br>AI B ILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code> |
| <code>In what ways does the document propose to ensure that automated systems are designed to work effectively for the benefit of society?</code> | <code>BLUEPRINT FOR AN <br>AI B ILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code> |
| <code>What is the primary purpose of the Blueprint for an AI Bill of Rights as outlined by the White House Office of Science and Technology Policy?</code> | <code>About this Document <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was <br>published by the White House Office of Science and Technology Policy in October 2022. This framework was <br>released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered <br>world.” Its release follows a year of public engagement to inform this initiative. The framework is available <br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights <br>About the Office of Science and Technology Policy <br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology <br>Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office <br>of the President with advice on the scientific, engineering, and technological aspects of the economy, national <br>security, health, foreign relations, the environment, and the technological recovery and use of resources, among <br>other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of <br>Management and Budget (OMB) with an annual review and analysis of Federal research and development in <br>budgets, and serves as a source of scientific and technological analysis and judgment for the President with <br>respect to major policies, plans, and programs of the Federal Government. <br>Legal Disclaimer <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper <br>published by the White House Office of Science and Technology Policy. It is intended to support the <br>development of policies and practices that protect civil rights and promote democratic values in the building, <br>deployment, and governance of automated systems. <br>The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It <br>does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or <br>international instrument. It does not constitute binding guidance for the public or Federal agencies and <br>therefore does not require compliance with the principles described herein. It also is not determinative of what <br>the U.S. government’s position will be in any international negotiation. Adoption of these principles may not <br>meet the requirements of existing statutes, regulations, policies, or international instruments, or the <br>requirements of the Federal agencies that enforce them. These principles are not intended to, and do not, <br>prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or <br>intelligence activities. <br>The appropriate application of the principles set forth in this white paper depends significantly on the <br>context in which automated systems are being utilized. In some circumstances, application of these principles <br>in whole or in part may not be appropriate given the intended use of automated systems to achieve government <br>agency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of <br>automated systems in certain settings such as AI systems used as part of school building security or automated <br>health diagnostic systems. <br>The Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of <br>equities, for example, between the protection of sensitive law enforcement information and the principle of <br>notice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and <br>other law enforcement equities. Even in contexts where these principles may not apply in whole or in part, <br>federal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as <br>existing policies and safeguards that govern automated systems, including, for example, Executive Order 13960, <br>Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020). <br>This white paper recognizes that national security (which includes certain law enforcement and <br>homeland security activities) and defense activities are of increased sensitivity and interest to our nation’s <br>adversaries and are often subject to special requirements, such as those governing classified information and <br>other protected data. Such activities require alternative, compatible safeguards through existing policies that <br>govern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and <br>Responsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and <br>Framework. The implementation of these policies to national security and defense activities can be informed by <br>the Blueprint for an AI Bill of Rights where feasible.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters (a construction sketch follows the JSON below):
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
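A minimal sketch of constructing this loss with the listed parameters; `sentence_transformers_model_id` is again a placeholder:

```python
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder repo id
# scale=20.0 and cosine similarity mirror the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(model=model, scale=20.0, similarity_fct=cos_sim)
```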
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `num_train_epochs`: 2
- `multi_dataset_batch_sampler`: round_robin
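These non-default values translate into a training setup roughly like the sketch below. The training pair, output directory, and model id are hypothetical placeholders.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder repo id
loss = losses.MultipleNegativesRankingLoss(model=model, scale=20.0)

# Hypothetical (question, passage) pair mirroring the sentence_0/sentence_1 columns above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is the primary purpose of the Blueprint for an AI Bill of Rights?"],
    "sentence_1": ["The Blueprint for an AI Bill of Rights is a White House OSTP white paper ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # hypothetical output path
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    num_train_epochs=2,
    multi_dataset_batch_sampler="round_robin",
    # the card also sets eval_strategy="steps", which additionally requires an eval dataset
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```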
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_map@100 |
|:------:|:----:|:--------------:|
| 1.0 | 45 | 0.8179 |
| 1.1111 | 50 | 0.8318 |
| 2.0 | 90 | 0.8349 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MayBashendy/ArabicNewSplits7_FineTuningAraBERT_run3_AugV5_k16_task1_organization | MayBashendy | "2024-12-30T20:00:40Z" | 179 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-30T19:52:51Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_FineTuningAraBERT_run3_AugV5_k16_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_FineTuningAraBERT_run3_AugV5_k16_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8256
- Qwk: 0.6950
- Mse: 0.8256
- Rmse: 0.9086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
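For illustration, these values map onto a 🤗 Transformers `TrainingArguments` object roughly as follows; the output directory is a hypothetical placeholder:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="arabert-task1-organization",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```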
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0274 | 2 | 7.0431 | 0.0056 | 7.0431 | 2.6539 |
| No log | 0.0548 | 4 | 4.3669 | 0.0928 | 4.3669 | 2.0897 |
| No log | 0.0822 | 6 | 3.1837 | 0.0714 | 3.1837 | 1.7843 |
| No log | 0.1096 | 8 | 2.6432 | 0.0755 | 2.6432 | 1.6258 |
| No log | 0.1370 | 10 | 2.2348 | 0.1727 | 2.2348 | 1.4949 |
| No log | 0.1644 | 12 | 1.8249 | 0.2520 | 1.8249 | 1.3509 |
| No log | 0.1918 | 14 | 1.7978 | 0.3817 | 1.7978 | 1.3408 |
| No log | 0.2192 | 16 | 1.5800 | 0.3306 | 1.5800 | 1.2570 |
| No log | 0.2466 | 18 | 1.3791 | 0.3036 | 1.3791 | 1.1744 |
| No log | 0.2740 | 20 | 1.2955 | 0.2545 | 1.2955 | 1.1382 |
| No log | 0.3014 | 22 | 1.2538 | 0.3304 | 1.2538 | 1.1197 |
| No log | 0.3288 | 24 | 1.2998 | 0.3932 | 1.2998 | 1.1401 |
| No log | 0.3562 | 26 | 1.2076 | 0.4370 | 1.2076 | 1.0989 |
| No log | 0.3836 | 28 | 1.1033 | 0.4833 | 1.1033 | 1.0504 |
| No log | 0.4110 | 30 | 1.1703 | 0.4463 | 1.1703 | 1.0818 |
| No log | 0.4384 | 32 | 1.1029 | 0.4667 | 1.1029 | 1.0502 |
| No log | 0.4658 | 34 | 1.0821 | 0.5042 | 1.0821 | 1.0402 |
| No log | 0.4932 | 36 | 1.2980 | 0.4348 | 1.2980 | 1.1393 |
| No log | 0.5205 | 38 | 1.3862 | 0.4103 | 1.3862 | 1.1774 |
| No log | 0.5479 | 40 | 1.1398 | 0.5556 | 1.1398 | 1.0676 |
| No log | 0.5753 | 42 | 1.1428 | 0.5312 | 1.1428 | 1.0690 |
| No log | 0.6027 | 44 | 1.7016 | 0.3167 | 1.7016 | 1.3045 |
| No log | 0.6301 | 46 | 1.6571 | 0.3471 | 1.6571 | 1.2873 |
| No log | 0.6575 | 48 | 1.1821 | 0.5303 | 1.1821 | 1.0873 |
| No log | 0.6849 | 50 | 1.1517 | 0.5481 | 1.1517 | 1.0732 |
| No log | 0.7123 | 52 | 1.0478 | 0.6119 | 1.0478 | 1.0236 |
| No log | 0.7397 | 54 | 1.2679 | 0.5649 | 1.2679 | 1.1260 |
| No log | 0.7671 | 56 | 1.2914 | 0.5197 | 1.2914 | 1.1364 |
| No log | 0.7945 | 58 | 1.3278 | 0.3871 | 1.3278 | 1.1523 |
| No log | 0.8219 | 60 | 1.7051 | 0.4380 | 1.7051 | 1.3058 |
| No log | 0.8493 | 62 | 1.6260 | 0.4265 | 1.6260 | 1.2751 |
| No log | 0.8767 | 64 | 1.2046 | 0.3478 | 1.2046 | 1.0976 |
| No log | 0.9041 | 66 | 1.1374 | 0.5669 | 1.1374 | 1.0665 |
| No log | 0.9315 | 68 | 1.1177 | 0.5954 | 1.1177 | 1.0572 |
| No log | 0.9589 | 70 | 1.3061 | 0.5294 | 1.3061 | 1.1428 |
| No log | 0.9863 | 72 | 1.2864 | 0.5294 | 1.2864 | 1.1342 |
| No log | 1.0137 | 74 | 0.9919 | 0.6475 | 0.9919 | 0.9959 |
| No log | 1.0411 | 76 | 0.8733 | 0.6993 | 0.8733 | 0.9345 |
| No log | 1.0685 | 78 | 0.8155 | 0.7347 | 0.8155 | 0.9030 |
| No log | 1.0959 | 80 | 0.8477 | 0.6892 | 0.8477 | 0.9207 |
| No log | 1.1233 | 82 | 0.7611 | 0.7297 | 0.7611 | 0.8724 |
| No log | 1.1507 | 84 | 0.7094 | 0.7143 | 0.7094 | 0.8423 |
| No log | 1.1781 | 86 | 0.7211 | 0.7 | 0.7211 | 0.8492 |
| No log | 1.2055 | 88 | 0.7604 | 0.6809 | 0.7604 | 0.8720 |
| No log | 1.2329 | 90 | 0.7188 | 0.7299 | 0.7188 | 0.8478 |
| No log | 1.2603 | 92 | 0.8607 | 0.6901 | 0.8607 | 0.9277 |
| No log | 1.2877 | 94 | 0.8346 | 0.6715 | 0.8346 | 0.9135 |
| No log | 1.3151 | 96 | 0.7222 | 0.7429 | 0.7222 | 0.8498 |
| No log | 1.3425 | 98 | 0.9110 | 0.6618 | 0.9110 | 0.9545 |
| No log | 1.3699 | 100 | 1.1408 | 0.5303 | 1.1408 | 1.0681 |
| No log | 1.3973 | 102 | 1.0023 | 0.6269 | 1.0023 | 1.0012 |
| No log | 1.4247 | 104 | 0.7732 | 0.6950 | 0.7732 | 0.8793 |
| No log | 1.4521 | 106 | 0.7295 | 0.7101 | 0.7295 | 0.8541 |
| No log | 1.4795 | 108 | 0.7223 | 0.7153 | 0.7223 | 0.8499 |
| No log | 1.5068 | 110 | 0.7749 | 0.6912 | 0.7749 | 0.8803 |
| No log | 1.5342 | 112 | 0.9451 | 0.6087 | 0.9451 | 0.9722 |
| No log | 1.5616 | 114 | 1.0232 | 0.5755 | 1.0232 | 1.0115 |
| No log | 1.5890 | 116 | 0.8088 | 0.6667 | 0.8088 | 0.8993 |
| No log | 1.6164 | 118 | 0.6835 | 0.7947 | 0.6835 | 0.8267 |
| No log | 1.6438 | 120 | 0.7809 | 0.6479 | 0.7809 | 0.8837 |
| No log | 1.6712 | 122 | 0.6720 | 0.7733 | 0.6720 | 0.8197 |
| No log | 1.6986 | 124 | 0.6575 | 0.7534 | 0.6575 | 0.8108 |
| No log | 1.7260 | 126 | 0.9456 | 0.6294 | 0.9456 | 0.9724 |
| No log | 1.7534 | 128 | 1.1491 | 0.5294 | 1.1491 | 1.0720 |
| No log | 1.7808 | 130 | 0.9491 | 0.6803 | 0.9491 | 0.9742 |
| No log | 1.8082 | 132 | 0.7182 | 0.7682 | 0.7182 | 0.8474 |
| No log | 1.8356 | 134 | 0.6909 | 0.7619 | 0.6909 | 0.8312 |
| No log | 1.8630 | 136 | 0.8196 | 0.7034 | 0.8196 | 0.9053 |
| No log | 1.8904 | 138 | 1.0296 | 0.5931 | 1.0296 | 1.0147 |
| No log | 1.9178 | 140 | 0.9796 | 0.6111 | 0.9796 | 0.9897 |
| No log | 1.9452 | 142 | 0.7842 | 0.6980 | 0.7842 | 0.8855 |
| No log | 1.9726 | 144 | 0.6653 | 0.7898 | 0.6653 | 0.8157 |
| No log | 2.0 | 146 | 0.7114 | 0.7792 | 0.7114 | 0.8435 |
| No log | 2.0274 | 148 | 0.7903 | 0.6806 | 0.7903 | 0.8890 |
| No log | 2.0548 | 150 | 0.8936 | 0.6573 | 0.8936 | 0.9453 |
| No log | 2.0822 | 152 | 0.8522 | 0.6479 | 0.8522 | 0.9231 |
| No log | 2.1096 | 154 | 0.7406 | 0.7383 | 0.7406 | 0.8606 |
| No log | 2.1370 | 156 | 0.6350 | 0.7722 | 0.6350 | 0.7969 |
| No log | 2.1644 | 158 | 0.5847 | 0.7848 | 0.5847 | 0.7647 |
| No log | 2.1918 | 160 | 0.5933 | 0.7771 | 0.5933 | 0.7702 |
| No log | 2.2192 | 162 | 0.6861 | 0.7333 | 0.6861 | 0.8283 |
| No log | 2.2466 | 164 | 0.6123 | 0.7692 | 0.6123 | 0.7825 |
| No log | 2.2740 | 166 | 0.5269 | 0.8101 | 0.5269 | 0.7259 |
| No log | 2.3014 | 168 | 0.5405 | 0.8176 | 0.5405 | 0.7352 |
| No log | 2.3288 | 170 | 0.6248 | 0.8075 | 0.6248 | 0.7905 |
| No log | 2.3562 | 172 | 0.7162 | 0.7733 | 0.7162 | 0.8463 |
| No log | 2.3836 | 174 | 0.7041 | 0.7785 | 0.7041 | 0.8391 |
| No log | 2.4110 | 176 | 0.7230 | 0.7733 | 0.7230 | 0.8503 |
| No log | 2.4384 | 178 | 0.7211 | 0.7613 | 0.7211 | 0.8492 |
| No log | 2.4658 | 180 | 0.7449 | 0.7484 | 0.7449 | 0.8631 |
| No log | 2.4932 | 182 | 0.6857 | 0.8075 | 0.6857 | 0.8281 |
| No log | 2.5205 | 184 | 0.5876 | 0.8098 | 0.5876 | 0.7665 |
| No log | 2.5479 | 186 | 0.5422 | 0.8050 | 0.5422 | 0.7364 |
| No log | 2.5753 | 188 | 0.5135 | 0.8171 | 0.5135 | 0.7166 |
| No log | 2.6027 | 190 | 0.5349 | 0.8171 | 0.5349 | 0.7313 |
| No log | 2.6301 | 192 | 0.7738 | 0.7636 | 0.7738 | 0.8797 |
| No log | 2.6575 | 194 | 0.8199 | 0.7239 | 0.8199 | 0.9055 |
| No log | 2.6849 | 196 | 0.5951 | 0.8075 | 0.5951 | 0.7714 |
| No log | 2.7123 | 198 | 0.6109 | 0.7815 | 0.6109 | 0.7816 |
| No log | 2.7397 | 200 | 0.5993 | 0.8 | 0.5993 | 0.7741 |
| No log | 2.7671 | 202 | 0.6037 | 0.8148 | 0.6037 | 0.7770 |
| No log | 2.7945 | 204 | 0.6038 | 0.825 | 0.6038 | 0.7770 |
| No log | 2.8219 | 206 | 0.6223 | 0.7895 | 0.6223 | 0.7889 |
| No log | 2.8493 | 208 | 0.7066 | 0.7383 | 0.7066 | 0.8406 |
| No log | 2.8767 | 210 | 0.7213 | 0.7483 | 0.7213 | 0.8493 |
| No log | 2.9041 | 212 | 0.6289 | 0.7785 | 0.6289 | 0.7931 |
| No log | 2.9315 | 214 | 0.6083 | 0.8258 | 0.6083 | 0.7799 |
| No log | 2.9589 | 216 | 0.7126 | 0.7607 | 0.7126 | 0.8442 |
| No log | 2.9863 | 218 | 0.6582 | 0.8072 | 0.6582 | 0.8113 |
| No log | 3.0137 | 220 | 0.5217 | 0.825 | 0.5217 | 0.7223 |
| No log | 3.0411 | 222 | 0.6605 | 0.7432 | 0.6605 | 0.8127 |
| No log | 3.0685 | 224 | 0.8032 | 0.6849 | 0.8032 | 0.8962 |
| No log | 3.0959 | 226 | 0.6903 | 0.7413 | 0.6903 | 0.8308 |
| No log | 3.1233 | 228 | 0.5233 | 0.8280 | 0.5233 | 0.7234 |
| No log | 3.1507 | 230 | 0.6553 | 0.8075 | 0.6553 | 0.8095 |
| No log | 3.1781 | 232 | 0.7168 | 0.7952 | 0.7168 | 0.8466 |
| No log | 3.2055 | 234 | 0.6312 | 0.8 | 0.6312 | 0.7945 |
| No log | 3.2329 | 236 | 0.6132 | 0.8105 | 0.6132 | 0.7831 |
| No log | 3.2603 | 238 | 0.6012 | 0.7974 | 0.6012 | 0.7754 |
| No log | 3.2877 | 240 | 0.5718 | 0.8 | 0.5718 | 0.7562 |
| No log | 3.3151 | 242 | 0.5427 | 0.8075 | 0.5427 | 0.7367 |
| No log | 3.3425 | 244 | 0.5613 | 0.8176 | 0.5613 | 0.7492 |
| No log | 3.3699 | 246 | 0.5795 | 0.8077 | 0.5795 | 0.7612 |
| No log | 3.3973 | 248 | 0.6018 | 0.7843 | 0.6018 | 0.7758 |
| No log | 3.4247 | 250 | 0.6137 | 0.7682 | 0.6137 | 0.7834 |
| No log | 3.4521 | 252 | 0.6329 | 0.7974 | 0.6329 | 0.7955 |
| No log | 3.4795 | 254 | 0.6627 | 0.8026 | 0.6627 | 0.8141 |
| No log | 3.5068 | 256 | 0.6493 | 0.7947 | 0.6493 | 0.8058 |
| No log | 3.5342 | 258 | 0.6339 | 0.8182 | 0.6339 | 0.7962 |
| No log | 3.5616 | 260 | 0.6422 | 0.7895 | 0.6422 | 0.8014 |
| No log | 3.5890 | 262 | 0.6237 | 0.8026 | 0.6237 | 0.7898 |
| No log | 3.6164 | 264 | 0.6231 | 0.7895 | 0.6231 | 0.7894 |
| No log | 3.6438 | 266 | 0.6553 | 0.76 | 0.6553 | 0.8095 |
| No log | 3.6712 | 268 | 0.6692 | 0.7483 | 0.6692 | 0.8181 |
| No log | 3.6986 | 270 | 0.6702 | 0.7947 | 0.6702 | 0.8187 |
| No log | 3.7260 | 272 | 0.7469 | 0.6883 | 0.7469 | 0.8642 |
| No log | 3.7534 | 274 | 0.6753 | 0.7875 | 0.6753 | 0.8218 |
| No log | 3.7808 | 276 | 0.5386 | 0.8077 | 0.5386 | 0.7339 |
| No log | 3.8082 | 278 | 0.6525 | 0.7848 | 0.6525 | 0.8078 |
| No log | 3.8356 | 280 | 0.8201 | 0.6267 | 0.8201 | 0.9056 |
| No log | 3.8630 | 282 | 0.7912 | 0.6939 | 0.7912 | 0.8895 |
| No log | 3.8904 | 284 | 0.6844 | 0.7871 | 0.6844 | 0.8273 |
| No log | 3.9178 | 286 | 0.6789 | 0.8 | 0.6789 | 0.8240 |
| No log | 3.9452 | 288 | 0.6768 | 0.8 | 0.6768 | 0.8227 |
| No log | 3.9726 | 290 | 0.7593 | 0.7211 | 0.7593 | 0.8714 |
| No log | 4.0 | 292 | 1.0558 | 0.5541 | 1.0558 | 1.0275 |
| No log | 4.0274 | 294 | 1.2961 | 0.5405 | 1.2961 | 1.1385 |
| No log | 4.0548 | 296 | 1.2540 | 0.5442 | 1.2540 | 1.1198 |
| No log | 4.0822 | 298 | 1.0963 | 0.5616 | 1.0963 | 1.0470 |
| No log | 4.1096 | 300 | 0.8816 | 0.6712 | 0.8816 | 0.9390 |
| No log | 4.1370 | 302 | 0.7760 | 0.7383 | 0.7760 | 0.8809 |
| No log | 4.1644 | 304 | 0.7783 | 0.7172 | 0.7783 | 0.8822 |
| No log | 4.1918 | 306 | 0.8100 | 0.7123 | 0.8100 | 0.9000 |
| No log | 4.2192 | 308 | 0.8773 | 0.6571 | 0.8773 | 0.9366 |
| No log | 4.2466 | 310 | 0.8769 | 0.6571 | 0.8769 | 0.9364 |
| No log | 4.2740 | 312 | 0.8306 | 0.7092 | 0.8306 | 0.9114 |
| No log | 4.3014 | 314 | 0.8273 | 0.6667 | 0.8273 | 0.9096 |
| No log | 4.3288 | 316 | 0.7851 | 0.7324 | 0.7851 | 0.8860 |
| No log | 4.3562 | 318 | 0.7485 | 0.7092 | 0.7485 | 0.8651 |
| No log | 4.3836 | 320 | 0.7008 | 0.75 | 0.7008 | 0.8371 |
| No log | 4.4110 | 322 | 0.5845 | 0.76 | 0.5845 | 0.7645 |
| No log | 4.4384 | 324 | 0.5249 | 0.8 | 0.5249 | 0.7245 |
| No log | 4.4658 | 326 | 0.5168 | 0.8 | 0.5168 | 0.7189 |
| No log | 4.4932 | 328 | 0.5405 | 0.7755 | 0.5405 | 0.7352 |
| No log | 4.5205 | 330 | 0.5627 | 0.7586 | 0.5627 | 0.7502 |
| No log | 4.5479 | 332 | 0.6468 | 0.7552 | 0.6468 | 0.8042 |
| No log | 4.5753 | 334 | 0.6926 | 0.7143 | 0.6926 | 0.8322 |
| No log | 4.6027 | 336 | 0.6122 | 0.7606 | 0.6122 | 0.7824 |
| No log | 4.6301 | 338 | 0.5782 | 0.7919 | 0.5782 | 0.7604 |
| No log | 4.6575 | 340 | 0.7068 | 0.7848 | 0.7068 | 0.8407 |
| No log | 4.6849 | 342 | 0.7233 | 0.7564 | 0.7233 | 0.8504 |
| No log | 4.7123 | 344 | 0.6049 | 0.7898 | 0.6049 | 0.7777 |
| No log | 4.7397 | 346 | 0.5422 | 0.7867 | 0.5422 | 0.7364 |
| No log | 4.7671 | 348 | 0.6424 | 0.75 | 0.6424 | 0.8015 |
| No log | 4.7945 | 350 | 0.7166 | 0.7448 | 0.7166 | 0.8465 |
| No log | 4.8219 | 352 | 0.7098 | 0.7413 | 0.7098 | 0.8425 |
| No log | 4.8493 | 354 | 0.6833 | 0.7445 | 0.6833 | 0.8266 |
| No log | 4.8767 | 356 | 0.7077 | 0.7571 | 0.7077 | 0.8413 |
| No log | 4.9041 | 358 | 0.6447 | 0.7552 | 0.6447 | 0.8029 |
| No log | 4.9315 | 360 | 0.5981 | 0.7671 | 0.5981 | 0.7734 |
| No log | 4.9589 | 362 | 0.5545 | 0.7838 | 0.5545 | 0.7446 |
| No log | 4.9863 | 364 | 0.5253 | 0.8 | 0.5253 | 0.7247 |
| No log | 5.0137 | 366 | 0.5314 | 0.7879 | 0.5314 | 0.7290 |
| No log | 5.0411 | 368 | 0.6354 | 0.7826 | 0.6354 | 0.7971 |
| No log | 5.0685 | 370 | 0.6247 | 0.7613 | 0.6247 | 0.7904 |
| No log | 5.0959 | 372 | 0.5664 | 0.7692 | 0.5664 | 0.7526 |
| No log | 5.1233 | 374 | 0.5871 | 0.8052 | 0.5871 | 0.7662 |
| No log | 5.1507 | 376 | 0.6560 | 0.7895 | 0.6560 | 0.8100 |
| No log | 5.1781 | 378 | 0.6784 | 0.7582 | 0.6784 | 0.8236 |
| No log | 5.2055 | 380 | 0.6308 | 0.7821 | 0.6308 | 0.7942 |
| No log | 5.2329 | 382 | 0.6265 | 0.8052 | 0.6265 | 0.7915 |
| No log | 5.2603 | 384 | 0.6216 | 0.7742 | 0.6216 | 0.7884 |
| No log | 5.2877 | 386 | 0.6057 | 0.7843 | 0.6057 | 0.7783 |
| No log | 5.3151 | 388 | 0.6044 | 0.7792 | 0.6044 | 0.7774 |
| No log | 5.3425 | 390 | 0.5927 | 0.8077 | 0.5927 | 0.7699 |
| No log | 5.3699 | 392 | 0.5994 | 0.8025 | 0.5994 | 0.7742 |
| No log | 5.3973 | 394 | 0.5892 | 0.8026 | 0.5892 | 0.7676 |
| No log | 5.4247 | 396 | 0.6084 | 0.7843 | 0.6084 | 0.7800 |
| No log | 5.4521 | 398 | 0.6762 | 0.7568 | 0.6762 | 0.8223 |
| No log | 5.4795 | 400 | 0.6952 | 0.7619 | 0.6952 | 0.8338 |
| No log | 5.5068 | 402 | 0.6973 | 0.7619 | 0.6973 | 0.8351 |
| No log | 5.5342 | 404 | 0.6834 | 0.7682 | 0.6834 | 0.8267 |
| No log | 5.5616 | 406 | 0.6947 | 0.7867 | 0.6947 | 0.8335 |
| No log | 5.5890 | 408 | 0.7080 | 0.7867 | 0.7080 | 0.8414 |
| No log | 5.6164 | 410 | 0.6883 | 0.7682 | 0.6883 | 0.8296 |
| No log | 5.6438 | 412 | 0.7004 | 0.7742 | 0.7004 | 0.8369 |
| No log | 5.6712 | 414 | 0.7972 | 0.7215 | 0.7972 | 0.8929 |
| No log | 5.6986 | 416 | 0.7759 | 0.7595 | 0.7759 | 0.8809 |
| No log | 5.7260 | 418 | 0.6785 | 0.7922 | 0.6785 | 0.8237 |
| No log | 5.7534 | 420 | 0.6459 | 0.7843 | 0.6459 | 0.8037 |
| No log | 5.7808 | 422 | 0.6896 | 0.7901 | 0.6896 | 0.8304 |
| No log | 5.8082 | 424 | 0.7433 | 0.75 | 0.7433 | 0.8621 |
| No log | 5.8356 | 426 | 0.7004 | 0.7898 | 0.7004 | 0.8369 |
| No log | 5.8630 | 428 | 0.6296 | 0.7682 | 0.6296 | 0.7935 |
| No log | 5.8904 | 430 | 0.6418 | 0.7843 | 0.6418 | 0.8011 |
| No log | 5.9178 | 432 | 0.6573 | 0.7730 | 0.6573 | 0.8107 |
| No log | 5.9452 | 434 | 0.5796 | 0.8050 | 0.5796 | 0.7613 |
| No log | 5.9726 | 436 | 0.5471 | 0.8049 | 0.5471 | 0.7397 |
| No log | 6.0 | 438 | 0.6904 | 0.7927 | 0.6904 | 0.8309 |
| No log | 6.0274 | 440 | 0.7915 | 0.7702 | 0.7915 | 0.8897 |
| No log | 6.0548 | 442 | 0.6714 | 0.775 | 0.6714 | 0.8194 |
| No log | 6.0822 | 444 | 0.5868 | 0.8129 | 0.5868 | 0.7660 |
| No log | 6.1096 | 446 | 0.6259 | 0.7843 | 0.6259 | 0.7911 |
| No log | 6.1370 | 448 | 0.6773 | 0.7643 | 0.6773 | 0.8230 |
| No log | 6.1644 | 450 | 0.6834 | 0.7643 | 0.6834 | 0.8267 |
| No log | 6.1918 | 452 | 0.6998 | 0.7564 | 0.6998 | 0.8366 |
| No log | 6.2192 | 454 | 0.7043 | 0.7712 | 0.7043 | 0.8392 |
| No log | 6.2466 | 456 | 0.7105 | 0.7712 | 0.7105 | 0.8429 |
| No log | 6.2740 | 458 | 0.7139 | 0.7383 | 0.7139 | 0.8449 |
| No log | 6.3014 | 460 | 0.7024 | 0.7226 | 0.7024 | 0.8381 |
| No log | 6.3288 | 462 | 0.6602 | 0.7389 | 0.6602 | 0.8125 |
| No log | 6.3562 | 464 | 0.6182 | 0.7875 | 0.6182 | 0.7862 |
| No log | 6.3836 | 466 | 0.6473 | 0.8293 | 0.6473 | 0.8046 |
| No log | 6.4110 | 468 | 0.7314 | 0.7975 | 0.7314 | 0.8552 |
| No log | 6.4384 | 470 | 0.7562 | 0.775 | 0.7562 | 0.8696 |
| No log | 6.4658 | 472 | 0.7018 | 0.8221 | 0.7018 | 0.8377 |
| No log | 6.4932 | 474 | 0.6550 | 0.8176 | 0.6550 | 0.8093 |
| No log | 6.5205 | 476 | 0.6651 | 0.7742 | 0.6651 | 0.8155 |
| No log | 6.5479 | 478 | 0.6550 | 0.7742 | 0.6550 | 0.8093 |
| No log | 6.5753 | 480 | 0.6191 | 0.8125 | 0.6191 | 0.7868 |
| No log | 6.6027 | 482 | 0.6399 | 0.8364 | 0.6399 | 0.7999 |
| No log | 6.6301 | 484 | 0.6885 | 0.7901 | 0.6885 | 0.8297 |
| No log | 6.6575 | 486 | 0.7149 | 0.7771 | 0.7149 | 0.8455 |
| No log | 6.6849 | 488 | 0.6728 | 0.8 | 0.6728 | 0.8202 |
| No log | 6.7123 | 490 | 0.6123 | 0.8077 | 0.6123 | 0.7825 |
| No log | 6.7397 | 492 | 0.5945 | 0.8075 | 0.5945 | 0.7710 |
| No log | 6.7671 | 494 | 0.6629 | 0.7407 | 0.6629 | 0.8142 |
| No log | 6.7945 | 496 | 0.6575 | 0.7735 | 0.6575 | 0.8109 |
| No log | 6.8219 | 498 | 0.6159 | 0.8098 | 0.6159 | 0.7848 |
| 0.3908 | 6.8493 | 500 | 0.6206 | 0.825 | 0.6206 | 0.7878 |
| 0.3908 | 6.8767 | 502 | 0.6367 | 0.825 | 0.6367 | 0.7979 |
| 0.3908 | 6.9041 | 504 | 0.6418 | 0.8079 | 0.6418 | 0.8012 |
| 0.3908 | 6.9315 | 506 | 0.7221 | 0.7361 | 0.7221 | 0.8498 |
| 0.3908 | 6.9589 | 508 | 0.8417 | 0.6950 | 0.8417 | 0.9175 |
| 0.3908 | 6.9863 | 510 | 0.8256 | 0.6950 | 0.8256 | 0.9086 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
m-newhauser/setfit-ml-jobs | m-newhauser | "2024-06-01T09:11:10Z" | 7 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"region:us"
] | text-classification | "2024-05-31T11:48:32Z" | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/paraphrase-mpnet-base-v2
metrics:
- accuracy
widget:
- text: 'Senior Software Engineer (Ref #001135) - At Wells Fargo, we want to satisfy
our customers'' financial needs and help them succeed financially. We''re looking
for talented people who will put our customers at the center of everything we
do. Join our diverse and inclusive team where you''ll feel valued and inspired
to contribute your unique skills and experience.
Help us build a better Wells Fargo. It all begins with outstanding talent. It
all begins with you.
Wells Fargo Technology sets IT strategy; enhances the design, development, and
operations of our systems; optimizes the Wells Fargo infrastructure footprint;
provides information security; and enables continuous banking access through in-store,
online, ATM, and other channels to Wells Fargo''s more than 70 million global
customers.
Wells Fargo Bank N.A. seeks a Senior Software Engineer in Charlotte, NC.
Job Role and Responsibility: Drive the development and delivery for automating
application modules. Responsible for coordinating the implementation of the FLEXCUBE
Global Branch Foundations'' Account Services and Treasury Management Platform
for International Operations. Provide Subject Matter Expertise in the development
and design of architecture and business functions for FLEXCUBE Global Branch Foundation
and Treasury Management. Support the Configuration, Testing, Data Migration design
and Production Install activities. Work with partners to establish and maintain
a consistent automation methodology, including for end-to-end testing. Identify,
evaluate and resolve highly complex automation issues. Partner with stakeholders
both within and outside of technology on assigned projects. Telecommuting is permitted
up to 2 days a week. Position must appear in person to the location listed as
the work address.
Travel required: None.
Required Qualifications: Position requires a Bachelor''s degree in Applied Computer
Science, Computer Application, Computer Information Systems, Computer Science,
Computer Engineering, or related technical field plus Eight (8) years of experience
in the job offered or in a related position involving software development.
Specific Skills Required
5 years of PL/SQL experience5 years of experience designing, developing, and implementing
Oracle database solutions5 years of financial industry experience5 years of experience
integrating or supporting Oracle''s Flexcube banking application5 years of DevOps
experience5 years of experience with SeleniumExperience with international banking
branch products and processesExperience with JavaExperience with JunitExperience
with MicroservicesExperience with Performance TestingExperience with SwiftExperience
with GIT/JenkinsExperience with SOAP and REST web servicesExperience with Spring
FrameworkExperience with CucumberExperience with XML.
Qualified applicants send resume to: [email protected] and reference
Requisition #001135 in the subject line.
Posting End Date
2 May 2024
Job posting may come down early due to volume of applicants.
We Value Diversity
At Wells Fargo, we believe in diversity, equity and inclusion in the workplace;
accordingly, we welcome applications for employment from all qualified candidates,
regardless of race, color, gender, national origin, religion, age, sexual orientation,
gender identity, gender expression, genetic information, individuals with disabilities,
pregnancy, marital status, status as a protected veteran or any other status protected
by applicable law.
Employees support our focus on building strong customer relationships balanced
with a strong risk mitigating and compliance-driven culture which firmly establishes
those disciplines as critical to the success of our customers and company. They
are accountable for execution of all applicable risk programs (Credit, Market,
Financial Crimes, Operational, Regulatory Compliance), which includes effectively
following and adhering to applicable Wells Fargo policies and procedures, appropriately
fulfilling risk and compliance obligations, timely and effective escalation and
remediation of issues, and making sound risk decisions. There is emphasis on proactive
monitoring, governance, risk identification and escalation, as well as making
sound risk decisions commensurate with the business unit''s risk appetite and
all risk and compliance program requirements.
Candidates applying to job openings posted in US: All qualified applicants will
receive consideration for employment without regard to race, color, religion,
sex, sexual orientation, gender identity, national origin, disability, status
as a protected veteran, or any other legally protected characteristic.
Candidates applying to job openings posted in Canada: Applications for employment
are encouraged from all qualified candidates, including women, persons with disabilities,
aboriginal peoples and visible minorities. Accommodation for applicants with disabilities
is available upon request in connection with the recruitment process.
Applicants With Disabilities
To request a medical accommodation during the application or interview process,
visit Disability Inclusion at Wells Fargo .
Drug and Alcohol Policy
Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy
to learn more.
Reference Number
R-361641,'
- text: 'Data Scientist - AI Investment - Are you interested in revolutionising the
future of AI investment?
My client is looking for a data scientist to tackle intricate business challenges
through advanced analytics and machine learning techniques.
You will take charge of both technical prowess, overseeing the creation, implementation,
and upkeep of sophisticated machine learning models and algorithms, including
extensive language models.
This role offers an exceptional chance to make a substantial impact and establish
yourself as a visionary in the realms of data science and AI.
Responsibilities:You''ll spearhead the development and implementation of groundbreaking
AI and data science solutions.Steering the strategic path of the data science
community, remaining at the forefront of applied AI and AI research.Effectively
communicating with stakeholders and influencing decision-making.Overseeing project
delivery from inception to deployment, ensuring alignment with business goals.Identifying
and integrating state-of-the-art technologies, tools, and methodologies to drive
value through cost reduction, revenue generation, or enhanced customer experience.
Requirements:Proven AI research in finance industry. Ideally published with multiple
citations. Ph.D./Masters/Bachelor''s degree in computer science, mathematics,
statistics, engineering, or relevant field from a top 10 university in the US
or equivalent. Proficiency in key data science tools and methodologies, including
Python, PyTorch, TensorFlow, Jax, Numpy, Scikit-learn, time-series forecasting,
classification, regression, large-language models, and experiment design.A commitment
to staying abreast of the latest advancements in AI research and a drive to continuously
push boundaries.Extensive relevant work experience, encompassing a solid grasp
of statistical data analysis, machine learning algorithms, and deep learning frameworks.
Join my client on this thrilling journey and contribute to shaping the future
of data science and AI in the investment sector.,'
- text: "Accenture Global Alliance Leader - Our Company\n\nAt Teradata, we believe\
\ that people thrive when empowered with better information. That’s why we built\
\ the most complete cloud analytics and data platform for AI. By delivering harmonized\
\ data, trusted AI, and faster innovation, we uplift and empower our customers—and\
\ our customers’ customers—to make better, more confident decisions. The world’s\
\ top companies across every major industry trust Teradata to improve business\
\ performance, enrich customer experiences, and fully integrate data across the\
\ enterprise. \n\nWhat You'll Do\n\nAt Teradata, we are leading the data & AI\
\ era. As enterprises address today’s digital economy, they are faced with new\
\ competition and consumer expectations and are turning to data to power their\
\ future. Teradata has worked with the largest companies in the world for 40+\
\ years, bringing our experience and expertise to support global enterprises with\
\ their most demanding, mission-critical, complex, and large-scale data needs.\
\ Teradata is recognized as a leader in the cloud, data, AI and analytics spaces\
\ by top analyst firms, Gartner and Forrester, and Fortune Magazine as well. \n\
\nOur connected multi-cloud data platform for enterprise analytics, Teradata Vantage™,\
\ is an extremely scalable, secure, and resilient offering that simplifies ecosystems\
\ by connecting data and making it easier to uncover insights from across the\
\ organization…regardless of where that data resides. With Vantage, we enable\
\ companies to modernize their data management, from start to scale. Every day,\
\ millions of users benefit from our open data platform. Empowering customers\
\ and partners to develop and build how they like, we enable hundreds of business\
\ outcomes and solutions, including improving customer experience and profitability,\
\ driving operational efficiency, realizing financial transformation, or achieving\
\ operational efficiency. \n\nAs the world of data grows, we are the leader in\
\ enabling the future of connected businesses, powered by data intelligence. We\
\ are committed to delivering on this vision by following sustainable business\
\ practices and with a strong focus on diversity, equity, and inclusion. We believe\
\ that only by embracing diversity of identity, thought, background, expression,\
\ and perspective can we solve today’s challenges and reimagine tomorrow’s world.\
\ \n\nThis is a unique opportunity to join our team in a period of fast growth\
\ and expansion. If you are interested in working in a startup like environment\
\ where you can directly influence the future of cloud-based analytics solutions\
\ and services, then Teradata is the place for you. You will join a team of business\
\ development professionals, cloud experts, and business builders to deploy cloud\
\ services and cloud native integrations that bring Teradata’s analytics capabilities\
\ to the public cloud platforms. \n\nTeradata is looking for an exceptional leader\
\ to grow our business with Accenture globally (75% of the time) as well as with\
\ other global SI (25%), with a specific focus on developing Teradata cloud-based\
\ offerings and executing strategic go-to-market initiatives that will help our\
\ customers achieve their digital future. \n\nThe primary focus for this role\
\ will be to own the success of our relationship with Accenture and the secondary\
\ focus will be to own the development of new alliances with other Global SI.\
\ Those activities will include program management, contract management, funnel\
\ management, governance, financial metrics and collaborate cross-functionally\
\ with product management on roadmap and supporting the GTM sales teams and regional\
\ partner organizations. The Global Alliance Leader is ultimately responsible\
\ for attainment of revenue objectives, growth targets as well as having matrix\
\ management responsibility for a cross functional sales, services and support\
\ organization aligned with annual and long-term strategies to develop, grow and\
\ maintain our Alliance with Accenture and other SI. The Global Alliance Leader\
\ will also develop and maintain strong collaborative relationships with our Teradata\
\ Engineering, Product Management, Marketing, Finance and Services teams in support\
\ of tight alignment between the SI and team Teradata. \n\nIf you are a high energy,\
\ collaborative individual who can manage multiple touch points, take ownership\
\ of the business and create strong relationships across the organization and\
\ work collaboratively to support the success of our business within Teradata\
\ and the Cloud organization, this is the role for you. \n\nApplicants should\
\ have a proven track record in general management of a Global Partner like Accenture.\
\ The scope of the role includes identification of new business opportunities\
\ with or through the partner, including growth within existing Teradata accounts,\
\ management of the relationships and sales engagement process to gain agreement\
\ with requisite stakeholders on execution. The applicant will own the relationship\
\ with the partner, the business development, support sales and maintain up to\
\ date best practices on partnership. The role will require exceptional strategy\
\ and interpersonal skills, solid understanding of the Cloud Provider technology\
\ & GTM, and a deep understanding of Teradata strategies and business model. This\
\ role will report to the VP for System Integrators and Advisors. \n\nResponsibilities\n\
\nDevelop, build, maintain and execute the global partner business plan providing\
\ focus in generating new revenue. Manage, foster and grow relationships at the\
\ executive level with high visibility across all industry sub-segments. Drive\
\ increased adoption of Teradata solutions with partners; facilitate communication,\
\ enablement, and resource alignment to accelerate growth. Create, develop and\
\ execute functional or sector specific initiatives in line with the partner’s\
\ and Teradata strategic objectives. Develop an in-depth understanding of the\
\ partner’s business objectives and effectively communicate and align with Teradata\
\ to drive incremental growth. Develop strong relations and advocates with key\
\ decision makers on the partner side as well as internally in Teradata to support\
\ growth of our business. Identify new business opportunities with or through\
\ the partners, develop plans for how Teradata can leverage these and gain agreement\
\ with requisite stakeholders on execution. Develop presentations and materials\
\ to support sales with and through partner. Support sales calls and consulting\
\ engagements where the partners are involved by maintain up to date best practices\
\ on partnership. Achievement of sales and business development objectives associated\
\ with driving revenue and new account development and related activities. \n\n\
The successful candidate will report directly to the VP, SIs and Advisors. The\
\ successful candidate will work from a Teradata facility or virtual and will\
\ be expected to travel on business (25-40% travel). \n\nWhat Makes You a Qualified\
\ Candidate\n\nA successful candidate should be a strategic thinker, self-starter\
\ who is creative and driven. The candidate must possess the ability to lead,\
\ advise and advocate for partners. The desired candidate should be innovative\
\ and skilled at seizing opportunities and transforming strategy into results.\
\ \n\nBS, MBA, or MS in business, technical or professional discipline or equivalent\
\ work experience. Highly seasoned professionals with significant experience leading\
\ multi-functional teams. Minimum of 10 years of general management / leadership\
\ experience and, ideally, similar tenure of global alliances management experience\
\ (training/development/performance management of sales/consulting team). Minimum\
\ of 5 years of hands-on, Major account, goal ownership experience. Must have\
\ the ability to frame ambiguous business opportunities, create structured business\
\ recommendations, adapt quickly based on senior stakeholder input and effectively\
\ communicate to internal & external leadership teams. \n\nWhat You'll Bring\n\
\nStrong understanding of and practical experience in alliance management specifically\
\ within high-tech at both a strategic, tactical, and operational level. Enterprise\
\ sales leadership experience Partner/customer cloud services strategy development\
\ experience Ability to analyse partner business model & financials to develop\
\ business models and new revenue streams. Demonstrated ability to motivate &\
\ drive sales results through a global v-team. High degree of Executive presence;\
\ proven ability and experience to operate effectively at senior management and\
\ C-executive levels. Demonstrated ability to quickly gain trust and credibility.\
\ Strong Interpersonal and communication skills. Ability to deliver effective\
\ diplomatic communications as well as handle sensitive information and materials\
\ in a confidential manner. Technical background in databases, enterprise software,\
\ and current knowledge of Teradata solutions is desired. Passion to win, entrepreneurial,\
\ high energy, collaborative, responsive, and results-driven self-starter. Ability\
\ to deal well with change and be a team player, builder, and leader. Excellent\
\ presentation skills and confidence \n\n\n\nWhy We Think You’ll Love Teradata\n\
\nWe prioritize a people-first culture because we know our people are at the very\
\ heart of our success. We embrace a flexible work model because we trust our\
\ people to make decisions about how, when, and where they work. We focus on well-being\
\ because we care about our people and their ability to thrive both personally\
\ and professionally. We are an anti-racist company because our dedication to\
\ Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment\
\ to doing the work to foster an equitable environment that celebrates people\
\ for all of who they are.\n\nTeradata invites all identities and backgrounds\
\ in the workplace. We work with deliberation and intent to ensure we are cultivating\
\ collaboration and inclusivity across our global organization. We are proud to\
\ be an equal opportunity and affirmative action employer. We do not discriminate\
\ based upon race, color, ancestry, religion, creed, sex (including pregnancy,\
\ childbirth, breastfeeding, or related conditions), national origin, sexual orientation,\
\ age, citizenship, marital status, disability, medical condition, genetic information,\
\ gender identity or expression, military and veteran status, or any other legally\
\ protected status.,"
- text: "Manager - Branch 2 - R10041667 Manager - Branch 2 (Open)\n\nLocation:\n\n\
St. George, UT - Filling industrial\n\nHow will you CONTRIBUTE and GROW?\n\nPosition\
\ Title: Branch Manager\n\nSt. George, UT\n\nMinimal overnight travel for occasional\
\ training and meetings\n\nHow will you CONTRIBUTE and GROW?\n\nThe Branch Manager\
\ is responsible for all critical issues of the branch, including sales, sales\
\ growth, budgeting, gross margins, managing direct reports, and all operational\
\ expenses. Other key areas of responsibility include safety, customer service,\
\ inventory levels and resolution of personnel issues. Assist outside salespersons\
\ in preparing quotes, maintaining all required literature, providing customer\
\ follow-up, and gathering sales leads. Schedule and follow-up on deliveries,\
\ pre-call customers before scheduled deliveries. Control costs and monitor fiscal\
\ performance compared to budget. Maintain a store in a clean and orderly manner.\
\ Schedule staffing to ensure excellent customer service at all times.\n\nThis\
\ is a full time position with a full benefits package and a Monday through Friday\
\ workweek. Airgas values a great work life balance and has unlimited potential\
\ for career growth.\n\nIn particular, you will: \n\nEvaluate and monitor day-to-day\
\ activities of the branch to ensure cost effective operations and a safe and\
\ productive work environment.Establish and maintain clear and consistent lines\
\ of communication with direct reports and internal departments relative to customer\
\ successes, customer failures, new customer developments and other customer specific\
\ informationManages all branch personnel in accordance with company policies\
\ by hiring, training, motivating, planning and directing work. Provide performance\
\ feedback and development opportunities. Accurately complete and submit all sales-related\
\ paperwork (e.g., shippers, invoices, cylinder audits, month-end reports, cash\
\ reconciliations, deposits, etc.) in a timely mannerParticipates in the preparation\
\ of market and competitor information and annual sales analysis and forecast.Works\
\ in accordance with all policies and procedures and rules as prescribed by State,\
\ Federal and the Company. \n\n________________________\n\nAre you a MATCH?\n\n\
Are you a MATCH? \n\nPrevious management experience in a customer facing environmentAbility\
\ to lift up to 75 lbs and occasionally up to 125 lbs with the aid of material\
\ handling equipmentProficient computer skillsAbility to handle multiple tasks\
\ concurrentlyAbility to work independentlyGood communication skills (verbal and\
\ written)\n\nPreferred:\n\n3+ years’ experience in the welding or safety industry5+\
\ years’ experience in sales with proven success\n\n_________________________\n\
\nYour differences enhance our performance\n\nAt Airgas, we are committed to building\
\ a diverse and inclusive workplace that embraces the diversity of our employees,\
\ our customers, patients, community stakeholders and cultures across the world.\n\
\nWe welcome and consider applications from all qualified applicants, regardless\
\ of their race, gender, sexual orientation, religion, disability or any other\
\ protected characteristic. We strongly believe a diverse organization opens up\
\ opportunities for people to express their talent, both individually and collectively\
\ and it helps foster our ability to innovate by living our fundamentals, acting\
\ for our success and creating an engaging environment in a changing world.\n\n\
_________________________\n\nEqual Employment Opportunity Information\n\nWe are\
\ an equal opportunity employer. We evaluate qualified applicants without regard\
\ to race, color, religion, sex, sexual orientation, gender identity, national\
\ origin, disability, veteran status, or any other protected characteristic.\n\
\nPlease click here to view the EEO Know Your Rights poster and here to view the\
\ Pay Transparency Nondiscrimination poster. Airgas, an Air Liquide Company invites\
\ any applicant and/or employee to review the Company’s written Affirmative Action\
\ Plan or Policy Statement. This plan or policy statement is available for inspection\
\ upon request.\n\nAirgas, an Air Liquide Company and its group of companies does\
\ not discriminate against qualified applicants with disabilities and is committed\
\ to providing reasonable accommodations to the known disabilities of such individuals\
\ so as to ensure equal access to benefits and privileges of employment. If you\
\ are an individual with a disability and would like to request a reasonable accommodation\
\ as part of the employment selection process, please contact us by email at [email protected].\n\
\n_________________________\n\nCalifornia Privacy Notice,"
- text: "Sales Assistant Manager - Rent A Center\n\nReady to do your best work?\n\n\
Interested in a starting hourly rate up to $18?\n\nWhy should I apply in just\
\ a few clicks? \n\n Paid Time Off and Sundays Off -- We are Closed! Full-Time\
\ Employment and a Consistent Schedule Weekly Pay (companywide) Award Winning\
\ Culture with the Opportunity to AdvanceBonus potential for Assistant Managers\
\ and above Great BenefitsMedicalDentalVisionLife InsuranceSupplemental Life InsuranceSpouse/Dependent\
\ Life InsuranceShort Term DisabilityLong Term DisabilityFlexible Spending Accounts401(k)\
\ Savings Plan w/company matchPaid Time OffLegal InsuranceIdentity Theft Protection\
\ PlanHealth Savings AccountsHospital IndemnityCritical IllnessAccident InsuranceLimited\
\ Purpose Plan\nWhat will you do? Provide customers access to high-quality goods\
\ that enhance their quality of life. You will do meaningful work and make a difference\
\ in our customers' lives!\n\nA day in the life of a Sales Assistant Manager:\n\
\n Sales: Responsible for sales growth through completed rental agreements and\
\ prospecting new business and customers Customer Service: Provide friendly, top-notch\
\ customer experiences through \"white glove\" service with a servant's heart\
\ in our stores and in customer's homes Deliveries & Pickups: Opportunity to get\
\ out of the store and display a winning spirit through safe and compliant loading/unloading\
\ and installation of products, while following all handling and transportation\
\ procedures Merchandising: Maintain an inviting store with organized product\
\ and cleanliness with both customers and fellow coworkers in mind\n\nWhat are\
\ the minimum requirements?\n\n 1-3 years of retail/customer service, sales, or\
\ collections experience High school diploma or equivalent Must be at least 18\
\ years of age Valid state driver's license and good driving record -- You WILL\
\ be driving the company vehicles Ability to lift and move product such as furniture,\
\ electronics, and appliances Great communication and customer service skills\n\
\nWhat are some additional helpful traits?\n\n Seeking more than just a job, but\
\ a CAREER A desire to improve our customer's lives A hunger to learn the business\
\ Grit and determination \n\nThis is an excerpt from the full job description\
\ and is not intended to be all-inclusive. Other related duties may be required\
\ to meet the ongoing needs of the business. Rent-A-Center is committed to creating\
\ a diverse and inclusive work environment and is proud to be an equal opportunity\
\ employer.,"
pipeline_tag: text-classification
inference: true
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
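A minimal sketch of those two phases with the `setfit` API (the toy dataset and labels below are placeholders, not this model's actual training data):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot dataset: a handful of labeled examples per class.
train_ds = Dataset.from_dict({
    "text": ["Data Scientist - build ML models ...", "Branch Manager - run daily store operations ..."],
    "label": [1, 0],
})

# Wrapping a plain Sentence Transformer gives it a default LogisticRegression head.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=8, num_epochs=4),
    train_dataset=train_ds,
)
trainer.train()  # 1) contrastive fine-tuning of the body, 2) fitting the classification head
```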
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 0 | <ul><li>"Driver Manager - Days - Driver Manager - Olathe, KS\n\nThe role | An Driver Manager is responsible for\n\nLearning the business from the ground up through our hands-on training program that includes exposure to driver and equipment management, customer service, and transportation logistics.Maintaining a high level of engagement and cultivating positive working relationships with your fleet of assigned drivers.Reviewing scheduled pick-up and delivery appointments daily.Confirming that each driver is fully informed of all customer and company expectations on every load at the point of dispatch.Coordinating drivers’ scheduled home time, downtime, current status, and predicted time available.Addressing safety non-conformance and violations/incidents.Escalate matters impacting on-time delivery to appropriate departments as they occur.\n\nCandidates should expect to spend the majority of their day making/receiving calls.\n\nThe requirements | This will be a perfect fit for you if...\n\nSelf-motivated with a desire to learn about the growing transportation industry.Strong multi-tasking ability.You can type at least 40-45 wpm (preferred).Computer proficient and able to navigate between multiple programs.Excellent written and oral communication skills.\n\nThe details | What are the hours, pay, and location? \n\nThis is a full-time, in-office position located in south Olathe, KS.Starting salary of $50,000 + Incentives Schedule: Must be Flexible 5:00a-5:00p.\nThe perks | What's in it for you?\n\nA casual-dress, smoke-free work environmentHealth, dental, life, and disability insurance coverage401(k) plan with company matchPaid time off and additional time each anniversaryDiscounted gym memberships, mobile phone services, tires\n\nJob Type: Full-time\n\nTransAm is committed to the principles of equal employment opportunity and nondiscrimination.,"</li><li>'Senior Scientist, In Vivo Ocular Pharmacology - Description:\n\nJohnson & Johnson is recruiting for a Senior Scientist, Specialty Ophthalmology Discovery located in Spring House PA to support drug discovery of novel therapeutics for retinal disease.\n\nAt Johnson & Johnson,\u202fwe believe health is everything. Our strength in healthcare innovation empowers us to build a\u202fworld where complex diseases are prevented, treated, and cured,\u202fwhere treatments are smarter and less invasive, and\u202fsolutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com/.\u202f\n\nWe are seeking a highly motivated and talented lab-based Senior Scientist to join our Retinal Diseases Discovery team located in Spring House, PA (USA). The successful candidate will join a dynamic, multi-disciplinary team of exploratory scientists and play a key role in the evaluation of new drug concepts using relevant model systems. He/she will use emerging developments in these fields to generate novel target ideas and provide technical and strategic input with the goal of progressing novel drug candidates into translational development. 
The qualified candidate will work in cross-functional teams, including translational medicine, discovery, biomarker teams, program project teams and disease-specific working groups, to shape a discovery and development strategy for novel drug candidates and will bring forward novel translational animal models and innovative technologies to support the discovery pipeline.\n\nThis role is laboratory-based and requires an established background in retinal cell biology, neuroscience, or metabolism. Responsibilities include the execution of discovery projects to develop new retinal disease therapies and expand the retina portfolio. The selected candidate will contribute to research programs through validation of new targets and design and execution of team-based research to help drive project advancements.\n\nCore Responsibilities\n\nDesign and conduct experiments to support new target validation, analysis, and interpretation of results, with focus on in vivo assays/models to support discovery and translational drug development programs.Establish robust discovery and/or pharmacology data packages aimed at the progression of our differentiated assets into the clinic.Evolve project strategy in collaboration with the core team and functional partners and present scientific progress to discovery translational science, governance and leadership teams.Establish and execute against timelines to enable project progression while working in a fast-paced and highly matrixed environment.Support collaborations with academic investigators, including advising on experimentation and analysis of results, and conducting complementary experiments.Contribute to the preparation and submission of technical reports, patent applications, and manuscripts as appropriate.Ensure compliance with all company training, documentation, and ensure safe laboratory working practices.\n\nQualifications:\n\n A Ph.D. degree in the biological sciences (or equivalent) with 1 year of post-doctoral experience in the pharmaceutical industry or academic environment, or B.S. or M.S. degree in the biological sciences with a minimum of 8 years of experience in relevant pharmaceutical industry setting.Demonstrated experience in vivo studies, especially creation and characterization of animal models of retinal disease. 
Expertise in performing pharmacological and mechanistic studies in models of eye/ocular disease is highly desired.Established background in cellular and molecular biology and competency with in vitro techniques is preferred.A background in ophthalmic drug discovery and translational models of ocular disease is preferred.Experience with validation of new target concepts is required, with keen knowledge of experimental design, underlying scientific and biological principles, and data analysis and interpretation (i.e., from hypothesis through planning and execution of experiments in support of retinal disease-related programs).Established record of scientific accomplishments directed toward the discovery and development of therapeutic agents, including strong publication record in journals, oral presentations within and outside industry, and participation in professional societies is required.Must be highly motivated, with excellent organizational skills, and capable of working collaboratively in a fast-paced and highly matrixed environment.Ability to forge and foster collaborations (internal and external), deliver scientific content to diverse audiences, and agility and adaptability working across multiple projects is highly desirable.\n\nThe base pay range for this position is $104,000 to $166,750.\n\nThe Company maintains highly competitive, performance-based compensation programs. Under current guidelines, this position is eligible for an annual performance bonus in accordance with the terms of the applicable plan. The annual performance bonus is a cash bonus intended to provide an incentive to achieve annual targeted results by rewarding for individual and the corporation’s performance over a calendar/ performance year. Bonuses are awarded at the Company’s discretion on an individual basis.\n\nEmployees may be eligible to participate in Company employee benefit programs such as health insurance, savings plan, pension plan, disability plan, vacation pay, sick time, holiday pay, and work, personal and family time off in accordance with the terms of the applicable plans. Additional information can be found through the link below.\n\n\u202f\n\nEligible for benefits to include medical, dental, vision and time off as well as any others as provided for in the applicable Collective Bargaining Agreement.\n\nFor additional general information on company benefits, please go to: - https://www.careers.jnj.com/employee-benefits\n\nJohnson & Johnson is an Affirmative Action and Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, or protected veteran status and will not be discriminated against on the basis of disability.\n\n,'</li><li>"Public Relations Manager - Bullhorn is the global leader in software for the staffing industry. After more than 20 years, more than 10,000 companies rely on Bullhorn’s cloud-based platform to power their staffing processes from start to finish. Led by the original co-founder, partnered with venture capital, and powered by seasoned leaders across a global workforce with an eye toward innovation, Bullhorn has had year over year growth, making it the market leader in the recruitment software space while allowing for new opportunities for over 29% of our employees to advance their careers in the past 12 months.\n\nWe are a remote-first organization and over 38% of our employees reside outside the United States. 
Headquartered in Boston, we also have offices in London, Brighton, Rotterdam, Frankfurt and Sydney (just in case you’re in the area to stop by). Whether you’re local or remote, our vision is to ensure every employee has a sense of belonging, a voice that is heard, and a clear path for success. Your incredible experience as an employee will consist of flexible work hours to ensure a positive work-life balance and use Zoom, Slack, and other tools to stay connected.\n\nReporting into the Director, Global Content and Communications, the PR / Communications Manager will be responsible for strengthening Bullhorn’s position as a leader in the market, demonstrating our expertise in staffing as well as our growth as a technology business. This role will craft Bullhorn’s point of view on key topics and then work independently and with teammates to get those messages into market through a combination of earned and owned media – all to raise awareness of the business and advance our thought leadership efforts.\n\nA key part of this role will be partnering with our PR agency to identify story angles that highlight Bullhorn’s unique strengths and point of view. Are you obsessed with keeping up with the latest trends in AI? Do you comb the business section of your favourite news outlet trying to parse what broader trends mean to your customers? The Communications Manager needs to have a finger on the pulse of the staffing industry and an interest in tracking the economic and technology trends that will impact it.\n\nAs the lead for Bullhorn’s public relations efforts, this person will also consult with international marketing colleagues on PR efforts in markets outside the U.S.\n\nOccasional (<10%) travel may be required. Ability to work on East Coast hours preferred. Please submit a portfolio or writing samples with your resume.\n\nResponsibilities Include\n\nDeveloping, overseeing, and measuring the performance of Bullhorn’s earned media strategyHandling incoming media inquiries and occasional proactive pitchingManaging PR agency relationshipsManaging executive thought leadership opportunities, including owned contentCultivating a bench of internal experts and customer spokespeopleWriting and editing press releases, bylines, and media comments, when not supported by agencySupporting Bullhorn’s thought leadership strategy by identifying opportunities to publicly promote our GRID research program and other research deliverables\n\nThis Job Might Be a Fit For You If\n\nYou have proven experience in communications or public relations. 
Agency experience preferred; in-house experience a plus.You’re a strong writer who understands what makes a story compelling and relatable.You have experience working with executives and are comfortable adapting to different communication styles and voices.You have strong stakeholder management skills, can headline challenges and establish priorities, and unlock the power of the group.You thrive in fast-paced settings, with the ability to build relationships remotely.\n\nYou Might Be a Fit For Bullhorn If\n\nYou love working in an agile environment and can roll with the punchesYou take ownership of your work and continuously strive for improvement\n\nWhat We Offer...\n\nBenefits eligibility effective DAY ONE including Medical, Dental, Vision, 401(k), 401(k) Match, and moreUnlimited VacationMental health benefits (EAP & 98point6)Full Access to LinkedIn LearningQuarterly paid volunteer daysLucrative Employee Referral Program (eligible for prior to your first day)Career development opportunities up/across Bullhorn\n\nBullhorn's core purpose is to create an incredible customer experience, which starts with first creating an incredible employee experience. Our vision is for every employee to have a sense of belonging, a voice that is heard, and a clear path for success. We are committed to building diverse and inclusive teams, and our culture is shaped by our five core values: Ownership, Energy, Speed & Agility, Service, and Being Human.\n\nWe’re looking for real-life humans, each with their own unique set of thoughts, beliefs, cultures, identities, and a background and body that is completely individual. We also love humans who have taken less traditional paths of education and believe that experience and learning come in many forms. Together, all these unique individuals make Bullhorn stronger. If you’re reading this, you’re probably applying for/considering applying for a job with us, and we want you to know that Bullhorn is an equal opportunity employer. For us, that means we always have, and will always, strive to be as inclusive as possible in all aspects of employment and that we do not and will not tolerate discrimination of any kind.\n\n,"</li></ul> |
| 1 | <ul><li>'Intermediate Data Scientist - The School of Data Science (SDS) at the University of Virginia (UVA) seeks an Intermediate Data Scientist to work in collaboration with Don Brown, PhD and Sana Syed, MD, MS, focusing on understanding gut structure and function in common gastrointestinal (GI) diseases using cutting-edge machine learning and AI methods. The overarching goal of this work is to personalize care for pediatric patients suffering from chronic GI disease by improving diagnostics, predicting future disease complications, and identifying better disease biomarkers and novel drug targets. Details about the Gastro Science Lab and the Syed lab can be found at https://gastrodatasciencelab.org/ and https://med.virginia.edu/sana-syed-lab/.\n\nThis is a one year restricted position continuation is based on the availability of funding and satisfactory performance.\n\nData Scientists provide sophisticated data management and analysis to support University projects or programs. They focus primarily on high-level data projections and statistical analysis. They manage the design and programming of all data entry forms and the training and supervision of project research coders, student workers, and volunteers. They oversee regular assessments of reliability, submit data on a monthly basis, and assist with literature searches pertinent to various research project topics.\n\nThe Successful Candidate Will\n\nWork in a professional manner and have a strong willingness to learn and improve.Promote a culture of excellence by supporting others and generating new ideas to drive the lab forward.Act as a champion for the lab’s research at local, regional, and national conferences.Drive the collection of new data and the refinement of existing data for new purposes.Independently and creatively analyze data to test or refine hypotheses.Explore and examine data from multiple disparate sources in order to identify, analyze, and report trends in the data.Develop and execute of statistical mathematical and predictive models.Visualize and report data findings creatively in a variety of visual formats to support research presentations, manuscripts, and media write-ups.Establish links across existing data sources and find new interesting data correlations.Lead projects in concept formulation, determination of appropriate statistical methodology, data analysis, research evaluation, and final research reporting.Collaborate across faculty and staff to provide actionable data-driven insights.Formulate and define analytic scope and objectives through research and fact-finding as a self-starter.Be a leader of a lab data science team and provide guidance to less experienced data analysts/scientists.\n\nQualifications\n\nMaster\'s Degree and at least 3 years of relevant experience.Strong Organization and time line management skills .Experience in AI/ML modeling approaches such as: metabolic modeling, convolutional neural networks, and Gradient-weighted Class Activation Mapping.Understand all phases of the analytic process including data collection, preparation, modeling, evaluation, and deployment.\n\nAnticipated hiring range: $100,000 - $120,000 / annual\n\nTo Apply\n\nPlease visit UVA job board: https://jobs.virginia.edu and search for “R0056431”\n\nComplete An Application And Attach\n\nCover LetterCurriculum Vitae \n\nPlease note that multiple documents can be uploaded in the box.\n\nINTERNAL APPLICANTS: Please search for "find jobs" on your workday home page and apply using the internal job board.\n\nReview 
of applications will begin January 22, 2024 and continue until the position is filled.\n\nFor questions about the position, please contact: Adam Greene, Research Program Officer ([email protected]) For questions about the application process, please contact: Rhiannon O\'Coin ([email protected])\n\nFor more information about the School of Data Science, please see www.datascience.virginia.edu\n\nFor more information about the University of Virginia and the Charlottesville community, please see www.virginia.edu/life/charlottesville and www.embarkuva.com\n\nThe selected candidate will be required to complete a background check at the time of the offer per University policy.\n\nPHYSICAL DEMANDS This is primarily a sedentary job involving extensive use of desktop computers. The job does occasionally require traveling some distance to attend meetings, and programs.\n\nThe University of Virginia, including the UVA Health System which represents the UVA Medical Center, Schools of Medicine and Nursing, UVA Physician’s Group and the Claude Moore Health Sciences Library, are fundamentally committed to the diversity of our faculty and staff. We believe diversity is excellence expressing itself through every person\'s perspectives and lived experiences. We are equal opportunity and affirmative action employers. All qualified applicants will receive consideration for employment without regard to age, color, disability, gender identity or expression, marital status, national or ethnic origin, political affiliation, race, religion, sex (including pregnancy), sexual orientation, veteran status, and family medical or genetic information.,'</li><li>"Artificial Intelligence Engineer - Company Description Shake - social networking \n Role Description This is a part-time hybrid role for an AI Software Engineer at SHAKE. As an AI Software Engineer, you will be responsible for the day-to-day tasks associated with pattern recognition, computer science, neural networks, software development, and natural language processing (NLP). This role is remote work.\n Qualifications Strong knowledge and experience in pattern recognition, computer science, and neural networksProficiency in software development, with a focus on AI technologiesExperience in natural language processing (NLP)Ability to work independently and remotelyExcellent problem-solving and analytical skillsStrong communication and collaboration skillsMaster's or Ph.D. in Computer Science, AI, or related fieldsRelevant industry certifications (e.g., TensorFlow, PyTorch) are a plus,"</li><li>'Senior Staff Data Scientist (Remote) - Company Description\n\nVericast is a big data company. We receive on average over 100 billion intent signals daily, which assist in generating a deep understanding of a person’s interest and in-market signals across 1,300 interest topics. This is coupled with strong geographic targeting, as over 30 billion location signals are collected daily from over one million retail stores and over 120 million households.\n\nData Science plays a crucial role in delivering our solutions today and will play a more prominent role in our future. A typical data science project has a solid mathematical foundation, an exploratory dimension, and a data-driven workflow. This is also true at Vericast. Our data science projects have strong foundations on machine learning, data engineering, and modeling. We are building a privacy-centric future of digital advertising by focusing on web content. 
We are connecting web content to consumer interest and action, ultimately driving which ads are shown on a webpage.\n\nTo continue our journey, we are seeking data science experts who are passionate about using cutting edge technology and conceiving innovative methods to solve unique and complex problems. As a Senior Staff Data Scientist at Vericast, your contributions will help us stay at the forefront of the AdTech industry.\n\nJob Description\n\nA Senior Staff Data Scientist is a hands-on expert who is passionate about all aspects of data science and can contribute by designing, conducting, and incorporating analyses of large-scale data from a wide variety of sources. This involves converting ambiguous requirements to concrete solutions for exploring data, designing and/or applying appropriate algorithms, documenting the findings, and incorporating the analysis into end-to-end solutions, systems, and platforms. Effective communication with other job disciplines is required. Contributions are expected at a level of results above and beyond entry-level and mid-level Data Scientists.\n\nKey Duties & Responsibilities\n\nHave a wider impact by providing insights and effective leadership into data science, digital media, and data engineering. This individual will have the hands-on skills to be an individual contributor and the experience for mentoring and leading other data scientists (25%)Act often as a technical lead, determining approach, objectives, requirements, features, milestones, implementation tasks, and tradeoffs of end-to-end large scale data science projects, platforms, and systems (25%)Act as a subject matter expert in data science (ML/AI) algorithms and underlying technologies (programming languages and systems) (15%)Design, conduct, and incorporate analyses of large-scale data from a wide variety of sources (15%)Work within the scrum practices in team projects (10%)Contribute to hiring process by screening higher level candidates, team interviews, manager candidates, i.e., act as a "Bar Raiser" (10%)\n\nQualifications\n\nEducation\n\nBachelor\'s Degree in a quantitative discipline (Computer Science, Mathematics, Engineering, Statistics) (Required)Master\'s Degree in a quantitative discipline (Computer Science, Mathematics, Engineering, Statistics) (Desired)Doctorate Degree (Preferred)In lieu of the above education requirements, a combination of experience and education will be considered.\n\nExperience\n\n8 - 10 years Relevant Experience (Required)\n\nKnowledge/Skills/Abilities\n\nStrong analytical skills, with expertise and solid understanding of multiple statistical/analytical machine learning techniques applied at large scale.Technical proficiency in ML algorithms, scalable ML platforms, languages, and tools (Python, Spark, ML/Ops) in a corporate setting is highly desirable.Ability to communicate effectively across multi-disciplinary teams (e.g., data science, engineering and product management, org leadership).Prior experience in applying Data Science in Digital Marketing Technology, Graph Theory, Privacy and Geolocation Data is a plus.\n\nAdditional Information\n\nSalary:$160,000-175,000\n\nThe ultimate compensation offered for the position will depend upon several factors such as skill level, cost of living, experience, and responsibilities.\n\nVericast offers a generous total rewards benefits package that includes medical, dental and vision coverage, 401K and flexible PTO. 
A wide variety of additional benefits like life insurance, employee assistance and pet insurance are also available, not to mention smart and friendly coworkers!\n\nAt Vericast, we don’t just accept differences - we celebrate them, we support them, and we thrive on them for the benefit of our employees, our clients, and our community.\u202fAs an Equal Opportunity employer, Vericast considers applicants for all positions without regard to race, color, creed, religion, national origin or ancestry, sex, sexual orientation, gender identity, age, disability, genetic information, veteran status, or any other classifications protected by law. Applicants who have disabilities may request that accommodations be made in order to complete the selection process by contacting our Talent Acquisition team at [email protected]. EEO is the law. To review your rights under Equal Employment Opportunity please visit: www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf.\n\n,'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("m-newhauser/setfit-ml-jobs")
# Run inference
preds = model("""Data Scientist - AI Investment - Are you interested in revolutionising the future of AI investment?
My client is looking for a data scientist to tackle intricate business challenges through advanced analytics and machine learning techniques.
You will take charge of both technical prowess, overseeing the creation, implementation, and upkeep of sophisticated machine learning models and algorithms, including extensive language models.
This role offers an exceptional chance to make a substantial impact and establish yourself as a visionary in the realms of data science and AI.
Responsibilities:You'll spearhead the development and implementation of groundbreaking AI and data science solutions.Steering the strategic path of the data science community, remaining at the forefront of applied AI and AI research.Effectively communicating with stakeholders and influencing decision-making.Overseeing project delivery from inception to deployment, ensuring alignment with business goals.Identifying and integrating state-of-the-art technologies, tools, and methodologies to drive value through cost reduction, revenue generation, or enhanced customer experience.
Requirements:Proven AI research in finance industry. Ideally published with multiple citations. Ph.D./Masters/Bachelor's degree in computer science, mathematics, statistics, engineering, or relevant field from a top 10 university in the US or equivalent. Proficiency in key data science tools and methodologies, including Python, PyTorch, TensorFlow, Jax, Numpy, Scikit-learn, time-series forecasting, classification, regression, large-language models, and experiment design.A commitment to staying abreast of the latest advancements in AI research and a drive to continuously push boundaries.Extensive relevant work experience, encompassing a solid grasp of statistical data analysis, machine learning algorithms, and deep learning frameworks.
Join my client on this thrilling journey and contribute to shaping the future of data science and AI in the investment sector.,""")
```
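For batched inputs or class probabilities, `SetFitModel` also exposes `predict` and `predict_proba` (the example texts below are illustrative):
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("m-newhauser/setfit-ml-jobs")
texts = ["Senior Data Scientist, NLP team", "Retail Branch Manager"]
preds = model.predict(texts)         # hard labels, e.g. [1, 0]
probas = model.predict_proba(texts)  # per-class probabilities from the logistic-regression head
```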
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 116 | 700.0417 | 2183 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 10 |
| 1 | 14 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
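Assuming the `setfit` 1.x API, the settings above map roughly onto a `TrainingArguments` object like the following (a sketch mirroring the list, not the exact training script):
```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(8, 8),                  # (embedding phase, classifier phase)
    num_epochs=(4, 4),
    body_learning_rate=(2e-05, 1e-05),  # (contrastive phase, end-to-end phase)
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    end_to_end=False,
    use_amp=False,
    seed=42,
)
```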
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.1975 | - |
| 1.25 | 50 | 0.0018 | - |
| 2.5 | 100 | 0.0002 | - |
| 3.75 | 150 | 0.0002 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 3.0.0
- Transformers: 4.39.0
- PyTorch: 2.3.0+cu121
- Datasets: 2.19.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
sd-concepts-library/kairuno | sd-concepts-library | "2023-01-04T20:20:32Z" | 0 | 6 | null | [
"license:mit",
"region:us"
] | null | "2023-01-04T20:01:34Z" | ---
license: mit
---
### kairuno on Stable Diffusion
This is the `kairuno` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
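With a recent `diffusers`, the learned embedding can also be loaded directly; the base checkpoint and the `<kairuno>` placeholder token below are assumptions based on the usual sd-concepts-library convention:
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.load_textual_inversion("sd-concepts-library/kairuno")  # registers the <kairuno> token
image = pipe("a cityscape in the style of <kairuno>").images[0]
image.save("kairuno_style.png")
```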
Here is the new concept you will be able to use as a `style`:













|
krmanish/whisper-base-pron | krmanish | "2023-12-01T14:00:10Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-01T12:39:35Z" | ---
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-pron
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-pron
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2335
- Wer: 32.0641
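A minimal transcription sketch with the 🤗 `pipeline` API (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="krmanish/whisper-base-pron")
result = asr("sample.wav")  # placeholder path to a local audio file
print(result["text"])
```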
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2516 | 2.05 | 500 | 0.2910 | 51.4578 |
| 0.1081 | 4.1 | 1000 | 0.2335 | 32.0641 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
dbaibak/q-FrozenLake-v1-8x8-noSlippery | dbaibak | "2022-12-19T14:19:44Z" | 0 | 1 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-19T14:19:27Z" | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="dbaibak/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
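`load_from_hub` is defined in the Deep RL course notebooks rather than in a published package; a minimal equivalent looks like this (a sketch assuming the course's pickled-dict layout, with `map_name`/`is_slippery` inferred from the card's env name):
```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download and unpickle the model dict (Q-table plus metadata) from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="dbaibak/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Per the card's note, recreate the env with the attributes the model was trained with:
env = gym.make(model["env_id"], map_name="8x8", is_slippery=False)
```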
|
aroot/mbart-finetuned-eng-ind-78029440162 | aroot | "2023-06-30T18:18:23Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-06-30T18:01:54Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-78029440162
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-78029440162
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8577
- Bleu: 20.4223
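A usage sketch for English→Indonesian translation (language codes follow the mBART-50 convention, where `en_XX` is English and `id_ID` is Indonesian):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("aroot/mbart-finetuned-eng-ind-78029440162")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/mbart-finetuned-eng-ind-78029440162")
tokenizer.src_lang = "en_XX"

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
# Force the decoder to start with the Indonesian language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["id_ID"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```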
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
sergioalves/93007d08-0229-40cf-90ff-3ef32e9f5b96 | sergioalves | "2025-01-12T03:33:28Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | "2025-01-12T03:03:09Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 93007d08-0229-40cf-90ff-3ef32e9f5b96
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c1d958f35d4dc1c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c1d958f35d4dc1c_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: reference_completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: sergioalves/93007d08-0229-40cf-90ff-3ef32e9f5b96
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c1d958f35d4dc1c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4ba2443c-4a2f-477a-9299-35ebbb03b114
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4ba2443c-4a2f-477a-9299-35ebbb03b114
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 93007d08-0229-40cf-90ff-3ef32e9f5b96
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1297
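Since this repository holds a LoRA adapter (PEFT), inference means attaching the adapter to the base model. A minimal sketch, assuming default precision settings:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "sergioalves/93007d08-0229-40cf-90ff-3ef32e9f5b96")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```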
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (`OptimizerNames.ADAMW_HF`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.2000 |
| 1.2574 | 0.0011 | 8 | 1.1804 |
| 1.0476 | 0.0022 | 16 | 1.1459 |
| 1.2233 | 0.0033 | 24 | 1.1297 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
juansgultom/git-base-pokemon | juansgultom | "2024-06-06T14:10:33Z" | 65 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2023-12-01T06:01:49Z" | ---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7529
- Wer Score: 8.92
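A minimal captioning sketch with the standard GIT API follows; the image path is illustrative.
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "juansgultom/git-base-pokemon"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

image = Image.open("pokemon.png")  # hypothetical input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```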
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 3.844 | 50.0 | 50 | 6.7529 | 8.92 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Dhahlan2000/Simple_Translation-model-for-GPT-v3 | Dhahlan2000 | "2024-05-27T09:43:28Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Dhahlan2000/Simple_Translation-model-for-GPT-v2",
"base_model:finetune:Dhahlan2000/Simple_Translation-model-for-GPT-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-27T09:19:19Z" | ---
base_model: Dhahlan2000/Simple_Translation-model-for-GPT-v2
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Simple_Translation-model-for-GPT-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Simple_Translation-model-for-GPT-v3
This model is a fine-tuned version of [Dhahlan2000/Simple_Translation-model-for-GPT-v2](https://huggingface.co/Dhahlan2000/Simple_Translation-model-for-GPT-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4882
- Bleu: 30.5908
- Gen Len: 15.3476
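The card does not state the language pair or expected prompt format; a generic seq2seq inference sketch is shown below, with the input text as a placeholder.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Dhahlan2000/Simple_Translation-model-for-GPT-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("your source sentence here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```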
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.7494 | 1.0 | 4571 | 0.5369 | 28.6585 | 15.3298 |
| 0.6735 | 2.0 | 9142 | 0.4882 | 30.5908 | 15.3476 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mikeogezi/data_wp_output_gpt_4o_mini_llama-3.2-1b-instruct_lora_32_sample_100_bsz_8 | mikeogezi | "2025-03-30T02:13:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-30T02:13:54Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
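In the absence of card details, a generic load sketch is given below. The repo id is the model's Hub name; if the repository stores only a LoRA adapter (as the name suggests) rather than merged weights, `peft.PeftModel.from_pretrained` would be needed instead.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mikeogezi/data_wp_output_gpt_4o_mini_llama-3.2-1b-instruct_lora_32_sample_100_bsz_8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # assumes merged weights
```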
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hkivancoral/hushem_40x_deit_small_sgd_0001_fold2 | hkivancoral | "2023-12-25T16:38:36Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-25T16:23:02Z" | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_sgd_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5111111111111111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2548
- Accuracy: 0.5111
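A minimal classification sketch (the image path is illustrative):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "hkivancoral/hushem_40x_deit_small_sgd_0001_fold2"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("sample.png")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```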
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7252 | 1.0 | 215 | 1.5329 | 0.2667 |
| 1.5501 | 2.0 | 430 | 1.4609 | 0.3111 |
| 1.4663 | 3.0 | 645 | 1.4250 | 0.3111 |
| 1.4117 | 4.0 | 860 | 1.4032 | 0.2889 |
| 1.3533 | 5.0 | 1075 | 1.3870 | 0.2667 |
| 1.3221 | 6.0 | 1290 | 1.3733 | 0.2889 |
| 1.3111 | 7.0 | 1505 | 1.3613 | 0.3111 |
| 1.2698 | 8.0 | 1720 | 1.3509 | 0.3111 |
| 1.2425 | 9.0 | 1935 | 1.3418 | 0.3111 |
| 1.2243 | 10.0 | 2150 | 1.3338 | 0.3778 |
| 1.2016 | 11.0 | 2365 | 1.3268 | 0.3778 |
| 1.1128 | 12.0 | 2580 | 1.3203 | 0.3556 |
| 1.174 | 13.0 | 2795 | 1.3136 | 0.3556 |
| 1.1731 | 14.0 | 3010 | 1.3081 | 0.4 |
| 1.141 | 15.0 | 3225 | 1.3031 | 0.4 |
| 1.1163 | 16.0 | 3440 | 1.2979 | 0.4 |
| 1.1128 | 17.0 | 3655 | 1.2946 | 0.4222 |
| 1.0806 | 18.0 | 3870 | 1.2916 | 0.4222 |
| 1.0332 | 19.0 | 4085 | 1.2893 | 0.3778 |
| 1.0358 | 20.0 | 4300 | 1.2875 | 0.4 |
| 1.0352 | 21.0 | 4515 | 1.2855 | 0.4 |
| 1.0257 | 22.0 | 4730 | 1.2838 | 0.4 |
| 1.0362 | 23.0 | 4945 | 1.2822 | 0.4 |
| 1.0137 | 24.0 | 5160 | 1.2805 | 0.4 |
| 1.0067 | 25.0 | 5375 | 1.2787 | 0.4222 |
| 0.9834 | 26.0 | 5590 | 1.2771 | 0.4667 |
| 0.9889 | 27.0 | 5805 | 1.2753 | 0.4667 |
| 0.9291 | 28.0 | 6020 | 1.2744 | 0.4667 |
| 0.9563 | 29.0 | 6235 | 1.2728 | 0.4667 |
| 0.9949 | 30.0 | 6450 | 1.2710 | 0.4667 |
| 0.9331 | 31.0 | 6665 | 1.2698 | 0.4667 |
| 0.9189 | 32.0 | 6880 | 1.2683 | 0.4889 |
| 0.8977 | 33.0 | 7095 | 1.2667 | 0.4889 |
| 0.9506 | 34.0 | 7310 | 1.2657 | 0.4889 |
| 0.9018 | 35.0 | 7525 | 1.2644 | 0.4889 |
| 0.9085 | 36.0 | 7740 | 1.2632 | 0.4889 |
| 0.9525 | 37.0 | 7955 | 1.2617 | 0.4889 |
| 0.9147 | 38.0 | 8170 | 1.2608 | 0.4889 |
| 0.8837 | 39.0 | 8385 | 1.2597 | 0.5111 |
| 0.9228 | 40.0 | 8600 | 1.2588 | 0.5111 |
| 0.8773 | 41.0 | 8815 | 1.2582 | 0.5111 |
| 0.8964 | 42.0 | 9030 | 1.2574 | 0.5111 |
| 0.8892 | 43.0 | 9245 | 1.2568 | 0.5111 |
| 0.8986 | 44.0 | 9460 | 1.2562 | 0.5111 |
| 0.9114 | 45.0 | 9675 | 1.2557 | 0.5111 |
| 0.8745 | 46.0 | 9890 | 1.2553 | 0.5111 |
| 0.9224 | 47.0 | 10105 | 1.2551 | 0.5111 |
| 0.9229 | 48.0 | 10320 | 1.2549 | 0.5111 |
| 0.9087 | 49.0 | 10535 | 1.2549 | 0.5111 |
| 0.9371 | 50.0 | 10750 | 1.2548 | 0.5111 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
chishti055/outputs | chishti055 | "2025-03-11T09:21:44Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | "2025-03-11T09:21:35Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chishti055/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chishti055-north-south-university/Fine-tune-DeepSeek-R1-Distill-Llama-8B%20on%20PAssFinder/runs/zzbgaajz)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
oliveirabruno01/almirante-seed-7b-adapter | oliveirabruno01 | "2025-03-11T02:42:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-11T02:42:40Z" | ---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oliveirabruno01
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
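The repository name suggests this is an adapter. A hedged loading sketch, assuming a standard PEFT-format adapter on top of the 4-bit base (loading the base requires `bitsandbytes`):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/qwen2.5-7b-instruct-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "oliveirabruno01/almirante-seed-7b-adapter")
tokenizer = AutoTokenizer.from_pretrained("unsloth/qwen2.5-7b-instruct-bnb-4bit")
```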
|