Dataset schema:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-02 18:27:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 549 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-02 18:24:50 |
| card | string | lengths 11 to 1.01M |

Each record below is printed as a single row (modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt), followed by the record's card markdown.
| sobamchan/roberta-base-mean-softmax-50 | sobamchan | 2025-02-16T17:16:53Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:942069", "loss:MultipleNegativesRankingLoss", "en", "dataset:sentence-transformers/all-nli", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-02-16T17:15:57Z |
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:942069
- loss:MultipleNegativesRankingLoss
base_model: FacebookAI/roberta-base
widget:
- source_sentence: Two women having drinks and smoking cigarettes at the bar.
sentences:
- Women are celebrating at a bar.
- Two kids are outdoors.
- The four girls are attending the street festival.
- source_sentence: Two male police officers on patrol, wearing the normal gear and
bright green reflective shirts.
sentences:
- The officers have shot an unarmed black man and will not go to prison for it.
- The four girls are playing card games at the table.
- A woman is playing with a toddler.
- source_sentence: 5 women sitting around a table doing some crafts.
sentences:
- The girl wearing a dress skips down the sidewalk.
- The kids are together.
- Five men stand on chairs.
- source_sentence: Three men look on as two other men carve up a freshly barbecued
hog in the backyard.
sentences:
- A group of people prepare cars for racing.
- There are men watching others prepare food
- They are both waiting for a bus.
- source_sentence: The little boy is jumping into a puddle on the street.
sentences:
- A man is wearing a black shirt
- The dog is playing with a ball.
- The boy is outside.
datasets:
- sentence-transformers/all-nli
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on FacebookAI/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) <!-- at revision e2da8e2f811d1448a5b465c236feacd80ffbac7b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sobamchan/roberta-base-mean-softmax-50")
# Run inference
sentences = [
'The little boy is jumping into a puddle on the street.',
'The boy is outside.',
'The dog is playing with a ball.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
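Beyond pairwise similarity, the same embeddings can drive semantic search. The snippet below is a small illustration using `sentence_transformers.util`; the corpus and query sentences are invented for the example:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sobamchan/roberta-base-mean-softmax-50")

# Encode a tiny corpus and a query (example sentences, not from the training data)
corpus_embeddings = model.encode([
    "A dog runs through the park.",
    "The stock market fell sharply today.",
])
query_embeddings = model.encode(["An animal is outside."])

# For each query, return the top-k corpus entries ranked by cosine similarity
hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=1)
print(hits)  # e.g. [[{'corpus_id': 0, 'score': ...}]]
```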
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 942,069 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.4 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.69 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>0: ~33.40%</li><li>1: ~33.30%</li><li>2: ~33.30%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:---------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>1</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>2</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>0</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
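For reference, a loss configured this way can be constructed in Sentence Transformers as follows (a minimal sketch; the actual training script is not part of this card):
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("FacebookAI/roberta-base")
# scale=20.0 and cosine similarity mirror the parameters listed above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```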
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 19,657 evaluation samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.46 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.57 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>0: ~33.10%</li><li>1: ~33.30%</li><li>2: ~33.60%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>1</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>0</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>2</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
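These non-default values translate to the Sentence Transformers v3 training API roughly as follows (a sketch; `output_dir` is a placeholder):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=1e-5,
    warmup_ratio=0.1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```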
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Validation Loss |
|:------:|:----:|:---------------:|
| 0.0007 | 5 | 4.4994 |
| 0.0014 | 10 | 4.4981 |
| 0.0020 | 15 | 4.4960 |
| 0.0027 | 20 | 4.4930 |
| 0.0034 | 25 | 4.4890 |
| 0.0041 | 30 | 4.4842 |
| 0.0048 | 35 | 4.4784 |
| 0.0054 | 40 | 4.4716 |
| 0.0061 | 45 | 4.4636 |
| 0.0068 | 50 | 4.4543 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| asimokby/speecht5-asr-base-encoder-ft-decoder | asimokby | 2025-02-16T17:16:52Z | 0 | 0 | transformers | ["transformers", "safetensors", "speecht5", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-02-16T17:15:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Zionamsalem/Taxi-Taxi-v3 | Zionamsalem | 2025-02-16T17:15:40Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2025-02-16T17:15:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.80
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# Download the pickled Q-table from the Hub (load_from_hub helper shown below)
model = load_from_hub(repo_id="Zionamsalem/Taxi-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
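`load_from_hub` is not part of a published package; it is the small helper used in the Hugging Face Deep RL course. A typical definition (reconstructed here as an assumption, not shipped with this repo) is:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-Learning model dict from the Hugging Face Hub."""
    pickled_model = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickled_model, "rb") as f:
        return pickle.load(f)
```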
| elijahrenner/gliomagen | elijahrenner | 2025-02-16T17:15:37Z | 0 | 0 | null | ["diffusion-models", "medical-imaging", "glioma", "synthetic-data", "MRI", "en", "dataset:BraTS2024", "license:mit", "model-index", "region:us"] | null | 2025-02-16T16:56:26Z |
---
language: en
tags:
- diffusion-models
- medical-imaging
- glioma
- synthetic-data
- MRI
license: mit
datasets:
- BraTS2024
model-index:
- name: GliomaGen
results:
- task:
type: image-generation
dataset:
name: BraTS2024 Adult Post-Treatment Glioma
type: medical-imaging
metrics:
- name: FID (t1c)
type: frechet-inception-distance
value: 55.2028 ± 3.7446
- name: FID (t2w)
type: frechet-inception-distance
value: 54.9974 ± 3.2271
- name: KID (t1c)
type: kernel-inception-distance
value: 0.0293 ± 0.0019
- name: MS-SSIM (t1c)
type: multi-scale-structural-similarity
value: 0.7647 ± 0.2106
---
# GliomaGen: Conditional Diffusion for Post-Treatment Glioma MRI Generation
GliomaGen is a generative diffusion model tailored for synthesizing post-treatment glioma MRI images based on anatomical masks. It leverages a modified **Med-DDPM** architecture to create high-fidelity MRI images conditioned on segmented anatomical features.
## Model Overview
GliomaGen aims to address data scarcity in post-treatment glioma segmentation tasks by expanding existing datasets with synthetic, high-quality MRI volumes. The model takes anatomical masks as input and generates multi-modal MRI scans conditioned on segmentation labels.
## Model Performance
### **Quantitative Metrics**
| Modality | FID (↓) | KID (↓) | MS-SSIM (↑) |
|----------|--------|--------|-------------|
| t1c | 55.20 ± 3.74 | 0.0293 ± 0.0019 | 0.7647 ± 0.2106 |
| t2w | 54.99 ± 3.23 | 0.0291 ± 0.0010 | 0.6513 ± 0.2881 |
| t1n | 58.46 ± 3.86 | 0.0305 ± 0.0011 | 0.7005 ± 0.2585 |
| t2f | 70.42 ± 4.17 | 0.0370 ± 0.0018 | 0.7842 ± 0.1551 |
## Usage
To use GliomaGen for MRI generation, see the [GitHub repository](https://github.com/elijahrenner/gliomagen).
## BraTS 2024 Adult Post-Treatment Glioma-Synthetic
Alongside GliomaGen, a synthetic dataset of $N=2124$ MR images is released on Hugging Face.
| alfageme4/links_3.2-1B_2_epochs | alfageme4 | 2025-02-16T17:15:29Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-02-16T17:15:23Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** alfageme4
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| YusufDurmaz/llama-exp-1 | YusufDurmaz | 2025-02-16T17:15:22Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-02-16T17:15:13Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** YusufDurmaz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| Ralphch97/DeepSeek_Finetuned_Ralph_v4.0 | Ralphch97 | 2025-02-16T17:14:50Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-15T16:19:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| drface/LAURALINKEDIN | drface | 2025-02-16T17:13:53Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-02-16T16:59:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LAURALINKEDIN
---
# Lauralinkedin
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LAURALINKEDIN` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('drface/LAURALINKEDIN', weight_name='lora.safetensors')
# Include the trigger word LAURALINKEDIN in your prompt
image = pipeline('LAURALINKEDIN, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
| Sophie-Rain-Virale-X-Video/Sophie.Rain.Spiderman.Video.Tutorial.Viral.Full.Video.Original.link | Sophie-Rain-Virale-X-Video | 2025-02-16T17:13:19Z | 0 | 0 | null | ["region:us"] | null | 2025-02-16T17:12:21Z |
| Yace19/aidetergent2 | Yace19 | 2025-02-16T17:12:02Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-02-16T16:28:18Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AIDETERGENT
---
# Aidetergent2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AIDETERGENT` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Yace19/aidetergent2', weight_name='lora.safetensors')
# Include the trigger word AIDETERGENT in your prompt
image = pipeline('AIDETERGENT, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
| aiamnoone/ppo-Huggy | aiamnoone | 2025-02-16T17:07:06Z | 0 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2025-02-16T17:02:31Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
| sharath232/language | sharath232 | 2025-02-16T17:06:25Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-02-16T17:06:25Z |
---
license: apache-2.0
---
| Lucy-in-the-Sky/Qwen2.5-Coder-32B-Instruct-Q2_K-GGUF | Lucy-in-the-Sky | 2025-02-16T17:04:47Z | 0 | 0 | transformers | ["transformers", "gguf", "code", "codeqwen", "chat", "qwen", "qwen-coder", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen2.5-Coder-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2025-02-16T17:03:50Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# Lucy-in-the-Sky/Qwen2.5-Coder-32B-Instruct-Q2_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lucy-in-the-Sky/Qwen2.5-Coder-32B-Instruct-Q2_K-GGUF --hf-file qwen2.5-coder-32b-instruct-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lucy-in-the-Sky/Qwen2.5-Coder-32B-Instruct-Q2_K-GGUF --hf-file qwen2.5-coder-32b-instruct-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lucy-in-the-Sky/Qwen2.5-Coder-32B-Instruct-Q2_K-GGUF --hf-file qwen2.5-coder-32b-instruct-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lucy-in-the-Sky/Qwen2.5-Coder-32B-Instruct-Q2_K-GGUF --hf-file qwen2.5-coder-32b-instruct-q2_k.gguf -c 2048
```
| sobamchan/roberta-base-mean-400 | sobamchan | 2025-02-16T17:03:40Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:557850", "loss:MultipleNegativesRankingLoss", "en", "dataset:sentence-transformers/all-nli", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-02-16T17:02:17Z |
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
base_model: FacebookAI/roberta-base
widget:
- source_sentence: A man is jumping unto his filthy bed.
sentences:
- A young male is looking at a newspaper while 2 females walks past him.
- The bed is dirty.
- The man is on the moon.
- source_sentence: A carefully balanced male stands on one foot near a clean ocean
beach area.
sentences:
- A man is ouside near the beach.
- Three policemen patrol the streets on bikes
- A man is sitting on his couch.
- source_sentence: The man is wearing a blue shirt.
sentences:
- Near the trashcan the man stood and smoked
- A man in a blue shirt leans on a wall beside a road with a blue van and red car
with water in the background.
- A man in a black shirt is playing a guitar.
- source_sentence: The girls are outdoors.
sentences:
- Two girls riding on an amusement part ride.
- a guy laughs while doing laundry
- Three girls are standing together in a room, one is listening, one is writing
on a wall and the third is talking to them.
- source_sentence: A construction worker peeking out of a manhole while his coworker
sits on the sidewalk smiling.
sentences:
- A worker is looking out of a manhole.
- A man is giving a presentation.
- The workers are both inside the manhole.
datasets:
- sentence-transformers/all-nli
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on FacebookAI/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) <!-- at revision e2da8e2f811d1448a5b465c236feacd80ffbac7b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sobamchan/roberta-base-mean-400")
# Run inference
sentences = [
'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.',
'A worker is looking out of a manhole.',
'The workers are both inside the manhole.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.38 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.8 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.02 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.81 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.37 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0011 | 5 | - | 5.1316 |
| 0.0023 | 10 | - | 5.1293 |
| 0.0034 | 15 | - | 5.1253 |
| 0.0046 | 20 | - | 5.1196 |
| 0.0057 | 25 | - | 5.1120 |
| 0.0069 | 30 | - | 5.1025 |
| 0.0080 | 35 | - | 5.0908 |
| 0.0092 | 40 | - | 5.0768 |
| 0.0103 | 45 | - | 5.0603 |
| 0.0115 | 50 | - | 5.0409 |
| 0.0126 | 55 | - | 5.0183 |
| 0.0138 | 60 | - | 4.9921 |
| 0.0149 | 65 | - | 4.9616 |
| 0.0161 | 70 | - | 4.9262 |
| 0.0172 | 75 | - | 4.8847 |
| 0.0184 | 80 | - | 4.8359 |
| 0.0195 | 85 | - | 4.7789 |
| 0.0206 | 90 | - | 4.7131 |
| 0.0218 | 95 | - | 4.6367 |
| 0.0229 | 100 | 5.1885 | 4.5468 |
| 0.0241 | 105 | - | 4.4403 |
| 0.0252 | 110 | - | 4.3148 |
| 0.0264 | 115 | - | 4.1678 |
| 0.0275 | 120 | - | 3.9960 |
| 0.0287 | 125 | - | 3.7965 |
| 0.0298 | 130 | - | 3.5700 |
| 0.0310 | 135 | - | 3.3183 |
| 0.0321 | 140 | - | 3.0434 |
| 0.0333 | 145 | - | 2.7582 |
| 0.0344 | 150 | - | 2.4786 |
| 0.0356 | 155 | - | 2.2217 |
| 0.0367 | 160 | - | 1.9959 |
| 0.0379 | 165 | - | 1.8082 |
| 0.0390 | 170 | - | 1.6611 |
| 0.0401 | 175 | - | 1.5397 |
| 0.0413 | 180 | - | 1.4406 |
| 0.0424 | 185 | - | 1.3592 |
| 0.0436 | 190 | - | 1.2935 |
| 0.0447 | 195 | - | 1.2393 |
| 0.0459 | 200 | 3.2102 | 1.1935 |
| 0.0470 | 205 | - | 1.1555 |
| 0.0482 | 210 | - | 1.1221 |
| 0.0493 | 215 | - | 1.0947 |
| 0.0505 | 220 | - | 1.0703 |
| 0.0516 | 225 | - | 1.0504 |
| 0.0528 | 230 | - | 1.0319 |
| 0.0539 | 235 | - | 1.0165 |
| 0.0551 | 240 | - | 1.0011 |
| 0.0562 | 245 | - | 0.9874 |
| 0.0574 | 250 | - | 0.9739 |
| 0.0585 | 255 | - | 0.9596 |
| 0.0596 | 260 | - | 0.9462 |
| 0.0608 | 265 | - | 0.9348 |
| 0.0619 | 270 | - | 0.9237 |
| 0.0631 | 275 | - | 0.9136 |
| 0.0642 | 280 | - | 0.9036 |
| 0.0654 | 285 | - | 0.8938 |
| 0.0665 | 290 | - | 0.8842 |
| 0.0677 | 295 | - | 0.8755 |
| 0.0688 | 300 | 1.6043 | 0.8665 |
| 0.0700 | 305 | - | 0.8554 |
| 0.0711 | 310 | - | 0.8430 |
| 0.0723 | 315 | - | 0.8302 |
| 0.0734 | 320 | - | 0.8176 |
| 0.0746 | 325 | - | 0.8079 |
| 0.0757 | 330 | - | 0.7993 |
| 0.0769 | 335 | - | 0.7927 |
| 0.0780 | 340 | - | 0.7864 |
| 0.0791 | 345 | - | 0.7797 |
| 0.0803 | 350 | - | 0.7713 |
| 0.0814 | 355 | - | 0.7635 |
| 0.0826 | 360 | - | 0.7564 |
| 0.0837 | 365 | - | 0.7484 |
| 0.0849 | 370 | - | 0.7418 |
| 0.0860 | 375 | - | 0.7329 |
| 0.0872 | 380 | - | 0.7236 |
| 0.0883 | 385 | - | 0.7142 |
| 0.0895 | 390 | - | 0.7050 |
| 0.0906 | 395 | - | 0.6964 |
| 0.0918 | 400 | 1.3624 | 0.6888 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| noirchan/Llama-3-8B_Suzume_DARE0.5 | noirchan | 2025-02-16T17:03:07Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:lightblue/suzume-llama-3-8B-japanese", "base_model:merge:lightblue/suzume-llama-3-8B-japanese", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-16T17:00:01Z |
---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
- lightblue/suzume-llama-3-8B-japanese
library_name: transformers
tags:
- mergekit
- merge
---
# merged_model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [lightblue/suzume-llama-3-8B-japanese](https://huggingface.co/lightblue/suzume-llama-3-8B-japanese)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B-Instruct
# No parameters necessary for base model
- model: lightblue/suzume-llama-3-8B-japanese
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
int8_mask: true
dtype: bfloat16
```
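The merged checkpoint loads like any other Llama 3 model. A minimal usage sketch (assuming `transformers`, `torch`, and `accelerate` are installed and there is enough memory for bfloat16 weights; the prompt is just an example):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "noirchan/Llama-3-8B_Suzume_DARE0.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Both parent models are instruction-tuned, so use the Llama 3 chat template
messages = [{"role": "user", "content": "こんにちは。自己紹介してください。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```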
|
summerdevlin46/glot500-multi-ar-hi-ur
|
summerdevlin46
| 2025-02-16T17:01:47Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:cis-lmu/glot500-base",
"base_model:finetune:cis-lmu/glot500-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-02-15T17:15:05Z |
---
library_name: transformers
license: apache-2.0
base_model: cis-lmu/glot500-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: glot500-multi-ar-hi-ur
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glot500-multi-ar-hi-ur
This model is a fine-tuned version of [cis-lmu/glot500-base](https://huggingface.co/cis-lmu/glot500-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6304
- Precision: 0.7752
- Recall: 0.7796
- F1: 0.7774
- Accuracy: 0.8290
## Model description
More information needed
## Intended uses & limitations
More information needed
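Pending details from the authors, here is a minimal inference sketch (assuming the standard `transformers` token-classification pipeline; the label set comes from the checkpoint's config and is not documented here, and judging by the model name the target languages are Arabic, Hindi, and Urdu):
```python
from transformers import pipeline

token_classifier = pipeline(
    "token-classification",
    model="summerdevlin46/glot500-multi-ar-hi-ur",
    aggregation_strategy="simple",
)
# Example Arabic input ("Welcome to Cairo")
print(token_classifier("مرحبا بكم في القاهرة"))
```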
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 295 | 0.8056 | 0.7341 | 0.7346 | 0.7343 | 0.7913 |
| 0.948 | 2.0 | 590 | 0.6304 | 0.7752 | 0.7796 | 0.7774 | 0.8290 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0
|
punamdevi15/Matrix
|
punamdevi15
| 2025-02-16T17:01:09Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-16T17:01:09Z |
---
license: apache-2.0
---
|
sobamchan/roberta-base-mean-300
|
sobamchan
| 2025-02-16T17:00:45Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-16T16:59:08Z |
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
base_model: FacebookAI/roberta-base
widget:
- source_sentence: A man is jumping unto his filthy bed.
sentences:
- A young male is looking at a newspaper while 2 females walks past him.
- The bed is dirty.
- The man is on the moon.
- source_sentence: A carefully balanced male stands on one foot near a clean ocean
beach area.
sentences:
- A man is ouside near the beach.
- Three policemen patrol the streets on bikes
- A man is sitting on his couch.
- source_sentence: The man is wearing a blue shirt.
sentences:
- Near the trashcan the man stood and smoked
- A man in a blue shirt leans on a wall beside a road with a blue van and red car
with water in the background.
- A man in a black shirt is playing a guitar.
- source_sentence: The girls are outdoors.
sentences:
- Two girls riding on an amusement part ride.
- a guy laughs while doing laundry
- Three girls are standing together in a room, one is listening, one is writing
on a wall and the third is talking to them.
- source_sentence: A construction worker peeking out of a manhole while his coworker
sits on the sidewalk smiling.
sentences:
- A worker is looking out of a manhole.
- A man is giving a presentation.
- The workers are both inside the manhole.
datasets:
- sentence-transformers/all-nli
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on FacebookAI/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) <!-- at revision e2da8e2f811d1448a5b465c236feacd80ffbac7b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sobamchan/roberta-base-mean-300")
# Run inference
sentences = [
'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.',
'A worker is looking out of a manhole.',
'The workers are both inside the manhole.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.38 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.8 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
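For reference, a minimal sketch of wiring this loss into a sentence-transformers training run (illustrative only; the exact script and pooling/normalization setup used for this checkpoint are not published here):
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("FacebookAI/roberta-base")
train_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="train")

# In-batch negatives: every other positive/negative in the batch serves as a
# negative for each anchor; scale=20.0 matches the parameters above
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```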
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.02 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.81 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.37 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0011 | 5 | - | 5.1316 |
| 0.0023 | 10 | - | 5.1293 |
| 0.0034 | 15 | - | 5.1253 |
| 0.0046 | 20 | - | 5.1196 |
| 0.0057 | 25 | - | 5.1120 |
| 0.0069 | 30 | - | 5.1025 |
| 0.0080 | 35 | - | 5.0908 |
| 0.0092 | 40 | - | 5.0768 |
| 0.0103 | 45 | - | 5.0603 |
| 0.0115 | 50 | - | 5.0409 |
| 0.0126 | 55 | - | 5.0183 |
| 0.0138 | 60 | - | 4.9921 |
| 0.0149 | 65 | - | 4.9616 |
| 0.0161 | 70 | - | 4.9262 |
| 0.0172 | 75 | - | 4.8847 |
| 0.0184 | 80 | - | 4.8359 |
| 0.0195 | 85 | - | 4.7789 |
| 0.0206 | 90 | - | 4.7131 |
| 0.0218 | 95 | - | 4.6367 |
| 0.0229 | 100 | 5.1885 | 4.5468 |
| 0.0241 | 105 | - | 4.4403 |
| 0.0252 | 110 | - | 4.3148 |
| 0.0264 | 115 | - | 4.1678 |
| 0.0275 | 120 | - | 3.9960 |
| 0.0287 | 125 | - | 3.7965 |
| 0.0298 | 130 | - | 3.5700 |
| 0.0310 | 135 | - | 3.3183 |
| 0.0321 | 140 | - | 3.0434 |
| 0.0333 | 145 | - | 2.7582 |
| 0.0344 | 150 | - | 2.4786 |
| 0.0356 | 155 | - | 2.2217 |
| 0.0367 | 160 | - | 1.9959 |
| 0.0379 | 165 | - | 1.8082 |
| 0.0390 | 170 | - | 1.6611 |
| 0.0401 | 175 | - | 1.5397 |
| 0.0413 | 180 | - | 1.4406 |
| 0.0424 | 185 | - | 1.3592 |
| 0.0436 | 190 | - | 1.2935 |
| 0.0447 | 195 | - | 1.2393 |
| 0.0459 | 200 | 3.2102 | 1.1935 |
| 0.0470 | 205 | - | 1.1555 |
| 0.0482 | 210 | - | 1.1221 |
| 0.0493 | 215 | - | 1.0947 |
| 0.0505 | 220 | - | 1.0703 |
| 0.0516 | 225 | - | 1.0504 |
| 0.0528 | 230 | - | 1.0319 |
| 0.0539 | 235 | - | 1.0165 |
| 0.0551 | 240 | - | 1.0011 |
| 0.0562 | 245 | - | 0.9874 |
| 0.0574 | 250 | - | 0.9739 |
| 0.0585 | 255 | - | 0.9596 |
| 0.0596 | 260 | - | 0.9462 |
| 0.0608 | 265 | - | 0.9348 |
| 0.0619 | 270 | - | 0.9237 |
| 0.0631 | 275 | - | 0.9136 |
| 0.0642 | 280 | - | 0.9036 |
| 0.0654 | 285 | - | 0.8938 |
| 0.0665 | 290 | - | 0.8842 |
| 0.0677 | 295 | - | 0.8755 |
| 0.0688 | 300 | 1.6043 | 0.8665 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
sobamchan/roberta-base-mean-250
|
sobamchan
| 2025-02-16T16:59:05Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-16T16:58:01Z |
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
base_model: FacebookAI/roberta-base
widget:
- source_sentence: A man is jumping unto his filthy bed.
sentences:
- A young male is looking at a newspaper while 2 females walks past him.
- The bed is dirty.
- The man is on the moon.
- source_sentence: A carefully balanced male stands on one foot near a clean ocean
beach area.
sentences:
- A man is ouside near the beach.
- Three policemen patrol the streets on bikes
- A man is sitting on his couch.
- source_sentence: The man is wearing a blue shirt.
sentences:
- Near the trashcan the man stood and smoked
- A man in a blue shirt leans on a wall beside a road with a blue van and red car
with water in the background.
- A man in a black shirt is playing a guitar.
- source_sentence: The girls are outdoors.
sentences:
- Two girls riding on an amusement part ride.
- a guy laughs while doing laundry
- Three girls are standing together in a room, one is listening, one is writing
on a wall and the third is talking to them.
- source_sentence: A construction worker peeking out of a manhole while his coworker
sits on the sidewalk smiling.
sentences:
- A worker is looking out of a manhole.
- A man is giving a presentation.
- The workers are both inside the manhole.
datasets:
- sentence-transformers/all-nli
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on FacebookAI/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) <!-- at revision e2da8e2f811d1448a5b465c236feacd80ffbac7b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sobamchan/roberta-base-mean-250")
# Run inference
sentences = [
'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.',
'A worker is looking out of a manhole.',
'The workers are both inside the manhole.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.38 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.8 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.02 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.81 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.37 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0011 | 5 | - | 5.1316 |
| 0.0023 | 10 | - | 5.1293 |
| 0.0034 | 15 | - | 5.1253 |
| 0.0046 | 20 | - | 5.1196 |
| 0.0057 | 25 | - | 5.1120 |
| 0.0069 | 30 | - | 5.1025 |
| 0.0080 | 35 | - | 5.0908 |
| 0.0092 | 40 | - | 5.0768 |
| 0.0103 | 45 | - | 5.0603 |
| 0.0115 | 50 | - | 5.0409 |
| 0.0126 | 55 | - | 5.0183 |
| 0.0138 | 60 | - | 4.9921 |
| 0.0149 | 65 | - | 4.9616 |
| 0.0161 | 70 | - | 4.9262 |
| 0.0172 | 75 | - | 4.8847 |
| 0.0184 | 80 | - | 4.8359 |
| 0.0195 | 85 | - | 4.7789 |
| 0.0206 | 90 | - | 4.7131 |
| 0.0218 | 95 | - | 4.6367 |
| 0.0229 | 100 | 5.1885 | 4.5468 |
| 0.0241 | 105 | - | 4.4403 |
| 0.0252 | 110 | - | 4.3148 |
| 0.0264 | 115 | - | 4.1678 |
| 0.0275 | 120 | - | 3.9960 |
| 0.0287 | 125 | - | 3.7965 |
| 0.0298 | 130 | - | 3.5700 |
| 0.0310 | 135 | - | 3.3183 |
| 0.0321 | 140 | - | 3.0434 |
| 0.0333 | 145 | - | 2.7582 |
| 0.0344 | 150 | - | 2.4786 |
| 0.0356 | 155 | - | 2.2217 |
| 0.0367 | 160 | - | 1.9959 |
| 0.0379 | 165 | - | 1.8082 |
| 0.0390 | 170 | - | 1.6611 |
| 0.0401 | 175 | - | 1.5397 |
| 0.0413 | 180 | - | 1.4406 |
| 0.0424 | 185 | - | 1.3592 |
| 0.0436 | 190 | - | 1.2935 |
| 0.0447 | 195 | - | 1.2393 |
| 0.0459 | 200 | 3.2102 | 1.1935 |
| 0.0470 | 205 | - | 1.1555 |
| 0.0482 | 210 | - | 1.1221 |
| 0.0493 | 215 | - | 1.0947 |
| 0.0505 | 220 | - | 1.0703 |
| 0.0516 | 225 | - | 1.0504 |
| 0.0528 | 230 | - | 1.0319 |
| 0.0539 | 235 | - | 1.0165 |
| 0.0551 | 240 | - | 1.0011 |
| 0.0562 | 245 | - | 0.9874 |
| 0.0574 | 250 | - | 0.9739 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Bulan-Sutena-Yang-Lagi/Bulan-Sutena-1-Menit-14-Detik.Video.Link.Short.Clip.Video.Viral.On.Social.Media.X.Twitter
|
Bulan-Sutena-Yang-Lagi
| 2025-02-16T16:58:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:57:50Z |
<a href="https://hd.poltulive.site/viral-videos/?v=Bulan-Sutena-1-Menit"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
<a href="https://hd.poltulive.site/viral-videos/?v=Bulan-Sutena-1-Menit">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a> </br>
<a href="https://hd.poltulive.site/viral-videos/?v=Bulan-Sutena-1-Menit">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a> </br>
|
efromomr/llm-course-hw1
|
efromomr
| 2025-02-16T16:58:36Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"ru",
"dataset:IgorVolochay/russian_jokes",
"region:us"
] | null | 2025-02-11T07:57:52Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
datasets:
- IgorVolochay/russian_jokes
language:
- ru
---
# Model card for efromomr/llm-course-hw1
A transformer LM trained on the russian_jokes dataset (validation loss = 2.347), with the architecture recipe described below.
Recipe details:
* SwiGLU in the FeedForward layer (see the sketch below)
* RoPE positional encoding
* MLA (multi-head latent attention)
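For reference, a minimal PyTorch sketch of the SwiGLU feed-forward block named above (illustrative; the actual layer lives in the course code):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """SiLU-gated feed-forward: down(silu(gate(x)) * up(x))."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden_dim, bias=False)
        self.up = nn.Linear(dim, hidden_dim, bias=False)
        self.down = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))
```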
Usage example:
```python
import torch

# ByteLevelBPETokenizer and TransformerForCausalLM are the course's own classes,
# defined in the accompanying homework code rather than in a published library
tokenizer = ByteLevelBPETokenizer.from_pretrained('efromomr/llm-course-hw1')
check_model = TransformerForCausalLM.from_pretrained('efromomr/llm-course-hw1')
device = "cuda" if torch.cuda.is_available() else "cpu"
check_model = check_model.to(device)
check_model = check_model.eval()
text = "Заходит в бар"
input_ids = torch.tensor(tokenizer.encode(text), device=device)[None, :]
model_output = check_model.generate(
    input_ids, max_new_tokens=200, eos_token_id=tokenizer.eos_token_id, do_sample=True, top_k=10
)
tokenizer.decode(model_output[0].tolist())
"""Заходит в бар и говорит ему: - Скажите, а почему ты такой развезлась?
- Он слышит на этот должен быть возможно, а ты - молодого водки, а ты же сама поймал?
- Да, а вторая бумажка... - Да не могу ничего помещение.
- Ну, и что? - Хочется, потому что у него на деньги надо пить.
- А мне тебе не сказать. Подумаете, вчера тоже возвращаюсь, я тебя не изобрету! - Н"""
```
|
dqcuong1004/whisper-small-vi
|
dqcuong1004
| 2025-02-16T16:57:00Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"vi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-02-12T09:24:30Z |
---
library_name: transformers
language:
- vi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Vi - QC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Vi - QC
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
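Until more details are added, a minimal transcription sketch (the audio path is a placeholder; `chunk_length_s` lets the pipeline handle clips longer than Whisper's 30-second window):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dqcuong1004/whisper-small-vi",
    chunk_length_s=30,
)
result = asr("sample_vi.wav")
print(result["text"])
```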
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu121
- Datasets 3.3.0
- Tokenizers 0.21.0
|
philocifer/legal-ft-2
|
philocifer
| 2025-02-16T16:55:37Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:156",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-16T16:55:05Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What is the author's perspective on the environmental impact of
plagiarism machines in the field discussed?
sentences:
- 'Prince Canuma’s excellent, fast moving mlx-vlm project brings vision LLMs to
Apple Silicon as well. I used that recently to run Qwen’s QvQ.
While MLX is a game changer, Apple’s own “Apple Intelligence” features have mostly
been a disappointment. I wrote about their initial announcement in June, and I
was optimistic that Apple had focused hard on the subset of LLM applications that
preserve user privacy and minimize the chance of users getting mislead by confusing
features.'
- 'I think telling people that this whole field is environmentally catastrophic
plagiarism machines that constantly make things up is doing those people a disservice,
no matter how much truth that represents. There is genuine value to be had here,
but getting to that value is unintuitive and needs guidance.
Those of us who understand this stuff have a duty to help everyone else figure
it out.
Everything tagged “llms” on my blog in 2024
Because I undoubtedly missed a whole bunch of things, here’s every long-form post
I wrote in 2024 that I tagged with llms:'
- Meanwhile, it’s increasingly common for end users to develop wildly inaccurate
mental models of how these things work and what they are capable of. I’ve seen
so many examples of people trying to win an argument with a screenshot from ChatGPT—an
inherently ludicrous proposition, given the inherent unreliability of these models
crossed with the fact that you can get them to say anything if you prompt them
right.
- source_sentence: What is the license under which Alibaba's QwQ model was released?
sentences:
- 'Those US export regulations on GPUs to China seem to have inspired some very
effective training optimizations!
The environmental impact got better
A welcome result of the increased efficiency of the models—both the hosted ones
and the ones I can run locally—is that the energy usage and environmental impact
of running a prompt has dropped enormously over the past couple of years.
OpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days.
I have it on good authority that neither Google Gemini nor Amazon Nova (two of
the least expensive model providers) are running prompts at a loss.'
- 'OpenAI are not the only game in town here. Google released their first entrant
in the category, gemini-2.0-flash-thinking-exp, on December 19th.
Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache
2.0 license, and that one I could run on my own machine. They followed that up
with a vision reasoning model called QvQ on December 24th, which I also ran locally.
DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through
their chat interface on November 20th.
To understand more about inference scaling I recommend Is AI progress slowing
down? by Arvind Narayanan and Sayash Kapoor.'
- 'The boring yet crucial secret behind good system prompts is test-driven development.
You don’t write down a system prompt and find ways to test it. You write down
tests and find a system prompt that passes them.
It’s become abundantly clear over the course of 2024 that writing good automated
evals for LLM-powered systems is the skill that’s most needed to build useful
applications on top of these models. If you have a strong eval suite you can adopt
new models faster, iterate better and build more reliable and useful product features
than your competition.
Vercel’s Malte Ubl:'
- source_sentence: How do longer inputs enhance the problem-solving capabilities of
an LLM compared to shorter prompts?
sentences:
- '19th: Weeknotes: GPT-4o mini, LLM 0.15, sqlite-utils 3.37 and building a staging
environment
August
6th: Weeknotes: a staging environment, a Datasette alpha and a bunch of new LLMs
8th: django-http-debug, a new Django app mostly written by Claude
23rd: Claude’s API now supports CORS requests, enabling client-side applications
26th: Building a tool showing how Gemini Pro can return bounding boxes for objects
in images
September
6th: Calling LLMs from client-side JavaScript, converting PDFs to HTML + weeknotes
10th: Notes from my appearance on the Software Misadventures Podcast
12th: Notes on OpenAI’s new o1 chain-of-thought models
20th: Notes on using LLMs for code'
- 'Longer inputs dramatically increase the scope of problems that can be solved
with an LLM: you can now throw in an entire book and ask questions about its contents,
but more importantly you can feed in a lot of example code to help the model correctly
solve a coding problem. LLM use-cases that involve long inputs are far more interesting
to me than short prompts that rely purely on the information already baked into
the model weights. Many of my tools were built using this pattern.'
- The most recent twist, again from December (December was a lot) is live video.
ChatGPT voice mode now provides the option to share your camera feed with the
model and talk about what you can see in real time. Google Gemini have a preview
of the same feature, which they managed to ship the day before ChatGPT did.
- source_sentence: What capabilities does Google’s Gemini have regarding audio input
and output?
sentences:
- 'Terminology aside, I remain skeptical as to their utility based, once again,
on the challenge of gullibility. LLMs believe anything you tell them. Any systems
that attempts to make meaningful decisions on your behalf will run into the same
roadblock: how good is a travel agent, or a digital assistant, or even a research
tool if it can’t distinguish truth from fiction?
Just the other day Google Search was caught serving up an entirely fake description
of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined
movie listing from a fan fiction wiki.'
- 'Watching in real time as “slop” becomes a term of art. the way that “spam” became
the term for unwanted emails, “slop” is going in the dictionary as the term for
unwanted AI generated content
I expanded that definition a tiny bit to this:
Slop describes AI-generated content that is both unrequested and unreviewed.
I ended up getting quoted talking about slop in both the Guardian and the NY Times.
Here’s what I said in the NY TImes:
Society needs concise ways to talk about modern A.I. — both the positives and
the negatives. ‘Ignore that email, it’s spam,’ and ‘Ignore that article, it’s
slop,’ are both useful lessons.'
- 'Your browser does not support the audio element.
OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also
accepts audio input, and the Google Gemini apps can speak in a similar way to
ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s
meant to roll out in Q1 of 2025.
Google’s NotebookLM, released in September, took audio output to a new level by
producing spookily realistic conversations between two “podcast hosts” about anything
you fed into their tool. They later added custom instructions, so naturally I
turned them into pelicans:
Your browser does not support the audio element.'
- source_sentence: How does the context compare a prompt without evals, models, and
UX to an ASML machine?
sentences:
- 'When @v0 first came out we were paranoid about protecting the prompt with all
kinds of pre and post processing complexity.
We completely pivoted to let it rip. A prompt without the evals, models, and especially
UX is like getting a broken ASML machine without a manual'
- 'I’m still trying to figure out the best patterns for doing this for my own work.
Everyone knows that evals are important, but there remains a lack of great guidance
for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
riding a bicycle benchmark is a pale imitation of what a real eval suite should
look like.
Apple Intelligence is bad, Apple’s MLX library is excellent
As a Mac user I’ve been feeling a lot better about my choice of platform this
year.
Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
was a huge disadvantage in terms of trying out new models.'
- 'That’s a total cost of $1.68 to process 68,000 images. That’s so absurdly cheap
I had to run the numbers three times to confirm I got it right.
How good are those descriptions? Here’s what I got from this command:
llm -m gemini-1.5-flash-8b-latest describe -a IMG_1825.jpeg'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.875
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.875
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.875
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9429554063988107
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.923611111111111
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.923611111111111
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("philocifer/legal-ft-2")
# Run inference
sentences = [
'How does the context compare a prompt without evals, models, and UX to an ASML machine?',
'When @v0 first came out we were paranoid about protecting the prompt with all kinds of pre and post processing complexity.\nWe completely pivoted to let it rip. A prompt without the evals, models, and especially UX is like getting a broken ASML machine without a manual',
'That’s a total cost of $1.68 to process 68,000 images. That’s so absurdly cheap I had to run the numbers three times to confirm I got it right.\nHow good are those descriptions? Here’s what I got from this command:\nllm -m gemini-1.5-flash-8b-latest describe -a IMG_1825.jpeg',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.875 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.875 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.875 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.943** |
| cosine_mrr@10 | 0.9236 |
| cosine_map@100 | 0.9236 |
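A minimal sketch of running this kind of evaluation yourself (the tiny query/corpus dictionaries below are placeholders, not the author's evaluation set):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("philocifer/legal-ft-2")

queries = {"q1": "Which license was Alibaba's QwQ model released under?"}
corpus = {
    "d1": "Alibaba's Qwen team released their QwQ model under an Apache 2.0 license.",
    "d2": "Google released gemini-2.0-flash-thinking-exp on December 19th.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
# Recent sentence-transformers versions return a dict of accuracy@k,
# precision@k, recall@k, NDCG@10, MRR@10 and MAP@100 scores
print(evaluator(model))
```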
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 20.07 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.53 tokens</li><li>max: 204 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are some ways the author has used LLMs to improve productivity and entertainment?</code> | <code>So far, I think they’re a net positive. I’ve used them on a personal level to improve my productivity (and entertain myself) in all sorts of different ways. I think people who learn how to use them effectively can gain a significant boost to their quality of life.<br>A lot of people are yet to be sold on their value! Some think their negatives outweigh their positives, some think they are all hot air, and some even think they represent an existential threat to humanity.<br>They’re actually quite easy to build<br>The most surprising thing we’ve learned about LLMs this year is that they’re actually quite easy to build.</code> |
| <code>What concerns do some people have regarding the value and impact of LLMs?</code> | <code>So far, I think they’re a net positive. I’ve used them on a personal level to improve my productivity (and entertain myself) in all sorts of different ways. I think people who learn how to use them effectively can gain a significant boost to their quality of life.<br>A lot of people are yet to be sold on their value! Some think their negatives outweigh their positives, some think they are all hot air, and some even think they represent an existential threat to humanity.<br>They’re actually quite easy to build<br>The most surprising thing we’ve learned about LLMs this year is that they’re actually quite easy to build.</code> |
| <code>What improvements were noted in the intonation of ChatGPT Advanced Voice mode during its rollout?</code> | <code>When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.<br>Even more fun: Advanced Voice mode can do accents! Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
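A minimal sketch of how this loss configuration can be constructed in Sentence Transformers (assuming the base model and default mean pooling used elsewhere in this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("FacebookAI/roberta-base")

# The inner ranking loss is applied at each truncated embedding size, so the
# model learns useful 768-, 512-, 256-, 128-, and 64-dimensional embeddings.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # -1: train on every dimension at every step
)
```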
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
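These non-default values correspond roughly to the following `SentenceTransformerTrainingArguments` (a sketch; the output directory is an assumption, not stated in this card):

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    MultiDatasetBatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumed for illustration
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```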
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9276 |
| 2.0 | 32 | 0.9330 |
| 3.0 | 48 | 0.9301 |
| 3.125 | 50 | 0.9301 |
| 4.0 | 64 | 0.9372 |
| 5.0 | 80 | 0.9401 |
| 6.0 | 96 | 0.9401 |
| 6.25 | 100 | 0.9401 |
| 7.0 | 112 | 0.9430 |
| 8.0 | 128 | 0.9484 |
| 9.0 | 144 | 0.9430 |
| 9.375 | 150 | 0.9430 |
| 10.0 | 160 | 0.9430 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
sobamchan/roberta-base-mean-100
|
sobamchan
| 2025-02-16T16:54:52Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-16T16:53:40Z |
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
base_model: FacebookAI/roberta-base
widget:
- source_sentence: A man is jumping unto his filthy bed.
sentences:
- A young male is looking at a newspaper while 2 females walks past him.
- The bed is dirty.
- The man is on the moon.
- source_sentence: A carefully balanced male stands on one foot near a clean ocean
beach area.
sentences:
- A man is ouside near the beach.
- Three policemen patrol the streets on bikes
- A man is sitting on his couch.
- source_sentence: The man is wearing a blue shirt.
sentences:
- Near the trashcan the man stood and smoked
- A man in a blue shirt leans on a wall beside a road with a blue van and red car
with water in the background.
- A man in a black shirt is playing a guitar.
- source_sentence: The girls are outdoors.
sentences:
- Two girls riding on an amusement part ride.
- a guy laughs while doing laundry
- Three girls are standing together in a room, one is listening, one is writing
on a wall and the third is talking to them.
- source_sentence: A construction worker peeking out of a manhole while his coworker
sits on the sidewalk smiling.
sentences:
- A worker is looking out of a manhole.
- A man is giving a presentation.
- The workers are both inside the manhole.
datasets:
- sentence-transformers/all-nli
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on FacebookAI/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) <!-- at revision e2da8e2f811d1448a5b465c236feacd80ffbac7b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sobamchan/roberta-base-mean-100")
# Run inference
sentences = [
'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.',
'A worker is looking out of a manhole.',
'The workers are both inside the manhole.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.38 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.8 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
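A minimal sketch of constructing this loss with the parameters above (the base model line is an assumption for illustration):

```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("FacebookAI/roberta-base")

# Each (anchor, positive) pair treats the other positives in the batch as
# negatives; scale=20.0 sharpens the softmax over cosine similarities.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```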
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.02 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.81 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.37 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0011 | 5 | - | 5.1316 |
| 0.0023 | 10 | - | 5.1293 |
| 0.0034 | 15 | - | 5.1253 |
| 0.0046 | 20 | - | 5.1196 |
| 0.0057 | 25 | - | 5.1120 |
| 0.0069 | 30 | - | 5.1025 |
| 0.0080 | 35 | - | 5.0908 |
| 0.0092 | 40 | - | 5.0768 |
| 0.0103 | 45 | - | 5.0603 |
| 0.0115 | 50 | - | 5.0409 |
| 0.0126 | 55 | - | 5.0183 |
| 0.0138 | 60 | - | 4.9921 |
| 0.0149 | 65 | - | 4.9616 |
| 0.0161 | 70 | - | 4.9262 |
| 0.0172 | 75 | - | 4.8847 |
| 0.0184 | 80 | - | 4.8359 |
| 0.0195 | 85 | - | 4.7789 |
| 0.0206 | 90 | - | 4.7131 |
| 0.0218 | 95 | - | 4.6367 |
| 0.0229 | 100 | 5.1885 | 4.5468 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
sobamchan/roberta-base-mean-10
|
sobamchan
| 2025-02-16T16:52:33Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-16T16:51:46Z |
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
base_model: FacebookAI/roberta-base
widget:
- source_sentence: A man is jumping unto his filthy bed.
sentences:
- A young male is looking at a newspaper while 2 females walks past him.
- The bed is dirty.
- The man is on the moon.
- source_sentence: A carefully balanced male stands on one foot near a clean ocean
beach area.
sentences:
- A man is ouside near the beach.
- Three policemen patrol the streets on bikes
- A man is sitting on his couch.
- source_sentence: The man is wearing a blue shirt.
sentences:
- Near the trashcan the man stood and smoked
- A man in a blue shirt leans on a wall beside a road with a blue van and red car
with water in the background.
- A man in a black shirt is playing a guitar.
- source_sentence: The girls are outdoors.
sentences:
- Two girls riding on an amusement part ride.
- a guy laughs while doing laundry
- Three girls are standing together in a room, one is listening, one is writing
on a wall and the third is talking to them.
- source_sentence: A construction worker peeking out of a manhole while his coworker
sits on the sidewalk smiling.
sentences:
- A worker is looking out of a manhole.
- A man is giving a presentation.
- The workers are both inside the manhole.
datasets:
- sentence-transformers/all-nli
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on FacebookAI/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) <!-- at revision e2da8e2f811d1448a5b465c236feacd80ffbac7b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sobamchan/roberta-base-mean-10")
# Run inference
sentences = [
'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.',
'A worker is looking out of a manhole.',
'The workers are both inside the manhole.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.38 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.8 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.02 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.81 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.37 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Validation Loss |
|:------:|:----:|:---------------:|
| 0.0011 | 5 | 5.1316 |
| 0.0023 | 10 | 5.1293 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Mattia2700/Llama-3.2-1B_ClinicalWhole_8e-06_constant_512
|
Mattia2700
| 2025-02-16T16:50:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-16T14:36:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WasamiKirua/Samanta-NewGenesis-Llama3.1-7B-DPO
|
WasamiKirua
| 2025-02-16T16:48:29Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"eq",
"psychology",
"phylosophy",
"companionship",
"conversational",
"it",
"en",
"dataset:WasamiKirua/Samantha-NeonGenesis-Unsloth-2.0",
"dataset:WasamiKirua/Human-Like-DPO-ita",
"base_model:WasamiKirua/llama-3.1-new-params-16bit",
"base_model:finetune:WasamiKirua/llama-3.1-new-params-16bit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-12T17:12:01Z |
---
base_model: WasamiKirua/llama-3.1-new-params-16bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- dpo
- eq
- psychology
- phylosophy
- companionship
language:
- it
- en
datasets:
- WasamiKirua/Samantha-NeonGenesis-Unsloth-2.0
- WasamiKirua/Human-Like-DPO-ita
library_name: transformers
---
<img src="https://i.postimg.cc/G220FFJz/temp-Imagekmm-N8k.avif" alt="cover" border="0" width="768px" height="1024px">
## Model Overview
**Samanta-NewGenesis-Llama3.1-7B-DPO** is a cutting-edge language model trained to excel in emotionally intelligent, philosophical, and psychological conversations in Italian. This new generation builds upon the foundation of the original Samanta model, enhancing its reasoning, emotional depth, and conversational fluency.
## Key Features
- **Multi-Turn Emotional EQ Conversations**: Trained on a carefully crafted Italian dataset designed to emphasize emotional intelligence and nuanced discussions.
- **Enhanced Reasoning & Sentimentality**: Incorporates custom reasoning techniques and fine-tuned responses influenced by philosophical discourse, psychological insights, and carefully selected song lyrics and movie scripts.
- **Refined Human-Like Interactions**: A two-stage training approach was used:
- **Supervised Fine-Tuning (SFT)**: Establishing a strong conversational and emotional foundation.
- **Direct Preference Optimization (DPO)**: Fine-tuned to generate human-like responses and reduce unnecessary refusals, allowing for more natural and engaging interactions.
- **NSFW-Aware Capabilities**: While the model has been trained on NSFW content, its primary focus remains on emotional intelligence and companionship. It can engage in such discussions when explicitly instructed, but it is **not designed to be a waifu or purely NSFW-oriented model**.
## Training Process
- **Dataset**: A curated Italian multi-turn dataset focusing on deep emotional understanding, philosophy, and psychology.
- **Fine-Tuning Approach**:
- **Stage 1**: Supervised Fine-Tuning (SFT) to develop conversational depth and EQ.
- **Stage 2**: Direct Preference Optimization (DPO) to refine human-like response generation and minimize refusal patterns.
- **Content Sources**:
- Carefully selected philosophical and psychological discussions.
- Sentimentally rich texts, including song lyrics and movie scripts, to enhance emotional expressiveness.
- NSFW data included as an optional component, ensuring controlled adaptability rather than being a primary focus.
## Usage & Considerations
- **Primary Use**: Emotional and intellectual companionship, psychological and philosophical discussions, and nuanced reasoning-based conversations.
- **NSFW Interaction**: Available when explicitly requested but remains secondary to the model's primary focus on emotional intelligence.
- **Ethical Use**: This model is designed for constructive and meaningful interactions. Misuse, including promoting harm, misinformation, or unethical applications, is strongly discouraged.
## Model Limitations
- **Cultural Context**: Trained primarily on Italian datasets, which limits its effectiveness in other languages and cultural contexts.
- **Bias & Safety**: While efforts have been made to ensure safe interactions, users should be mindful of potential biases or unexpected outputs in edge cases.
- **Not a Replacement for Professional Advice**: The model is not a licensed therapist or psychologist and should not be used as a substitute for professional mental health support.
## Conclusion
Samanta-NewGenesis-Llama3.1-7B-DPO represents a significant evolution in AI-driven emotional intelligence, reasoning, and companionship. By combining psychological and philosophical depth with refined human-like interaction, it offers an engaging and meaningful conversational experience.
For any questions, feedback, or collaborations, feel free to reach out!
## Using this model in Ollama
You can use this model in Ollama with the following (llama3.1) template:
```
FROM {__FILE_LOCATION__}
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER temperature 1.5
PARAMETER min_p 0.1
```
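Assuming you have a local GGUF export of the weights, have substituted its path for `{__FILE_LOCATION__}`, and have saved the template above as `Modelfile` (the model name below is illustrative), you can register and run it like this:

```bash
ollama create samanta-newgenesis -f Modelfile
ollama run samanta-newgenesis
```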
# Trained using Unsloth
- **Developed by:** WasamiKirua
- **Finetuned from model :** WasamiKirua/llama-3.1-new-params-16bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nitrals-Quants/Lieutenant_BMO-10B-Q4_K_M-GGUF
|
Nitrals-Quants
| 2025-02-16T16:47:55Z | 0 | 0 | null |
[
"gguf",
"prune",
"test",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Nitral-AI/Lieutenant_BMO-10B",
"base_model:quantized:Nitral-AI/Lieutenant_BMO-10B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-16T16:47:26Z |
---
license: other
language:
- en
base_model: Nitral-AI/Lieutenant_BMO-10B
tags:
- prune
- test
- llama-cpp
- gguf-my-repo
---
# Nitral-AI/Lieutenant_BMO-10B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Nitral-AI/Lieutenant_BMO-10B`](https://huggingface.co/Nitral-AI/Lieutenant_BMO-10B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nitral-AI/Lieutenant_BMO-10B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nitral-AI/Lieutenant_BMO-10B-Q4_K_M-GGUF --hf-file lieutenant_bmo-10b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nitral-AI/Lieutenant_BMO-10B-Q4_K_M-GGUF --hf-file lieutenant_bmo-10b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nitral-AI/Lieutenant_BMO-10B-Q4_K_M-GGUF --hf-file lieutenant_bmo-10b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nitral-AI/Lieutenant_BMO-10B-Q4_K_M-GGUF --hf-file lieutenant_bmo-10b-q4_k_m.gguf -c 2048
```
|
martamimg/FoKni2s4
|
martamimg
| 2025-02-16T16:45:06Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-16T16:19:07Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: FoKni2s4
---
# Fokni2S4
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `FoKni2s4` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('martamimg/FoKni2s4', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
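For example, one hypothetical way to bake this LoRA into the base weights at reduced strength (the 0.8 scale is illustrative, not a recommendation from the trainer):

```py
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('your prompt').images[0]
```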
|
leixa/164d0548-262c-4ac9-b355-26ac18045ef4
|
leixa
| 2025-02-16T16:43:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"region:us"
] | null | 2025-02-16T12:51:05Z |
---
library_name: peft
base_model: oopsung/llama2-7b-koNqa-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 164d0548-262c-4ac9-b355-26ac18045ef4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-koNqa-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e9164fe230588180_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e9164fe230588180_train_data.json
type:
field_instruction: abstr
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: leixa/164d0548-262c-4ac9-b355-26ac18045ef4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1200
micro_batch_size: 4
mlflow_experiment_name: /tmp/e9164fe230588180_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: acopia-grant
wandb_mode: online
wandb_name: 713ee57a-8b11-40c0-bad6-a28613c7ea55
wandb_project: Gradients-On-112
wandb_run: your_name
wandb_runid: 713ee57a-8b11-40c0-bad6-a28613c7ea55
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
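A config like this is typically launched with the axolotl CLI (assuming the YAML above is saved as `config.yaml`):

```bash
accelerate launch -m axolotl.cli.train config.yaml
```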
# 164d0548-262c-4ac9-b355-26ac18045ef4
This model is a fine-tuned version of [oopsung/llama2-7b-koNqa-test-v1](https://huggingface.co/oopsung/llama2-7b-koNqa-test-v1) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.1238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 1200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.2588 |
| 1.4025 | 0.0051 | 150 | 1.1906 |
| 1.377 | 0.0101 | 300 | 1.1650 |
| 1.4026 | 0.0152 | 450 | 1.1665 |
| 1.3352 | 0.0202 | 600 | 1.1488 |
| 1.3979 | 0.0253 | 750 | 1.1382 |
| 1.2981 | 0.0303 | 900 | 1.1357 |
| 1.3424 | 0.0354 | 1050 | 1.1416 |
| 1.4245 | 0.0404 | 1200 | 1.1238 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
DomainInsAdap/Meta-Llama-3.1-8B-Instruct-evol-instruct-70k-v1-5-2e-05-epoch-3
|
DomainInsAdap
| 2025-02-16T16:38:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-16T16:38:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yace19/ai-detergent
|
Yace19
| 2025-02-16T16:38:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-16T16:03:40Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AIDETERGENT
---
# Ai Detergent
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AIDETERGENT` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Yace19/ai-detergent', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
|
juliushanusch/chronos-large-final-fine-tuned-day-ahead-prices
|
juliushanusch
| 2025-02-16T16:38:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-02-16T16:21:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
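The repository name indicates a Chronos checkpoint fine-tuned on day-ahead electricity prices, so here is a minimal forecasting sketch, assuming the upstream `chronos-forecasting` package interface (the context values are dummies):

```python
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "juliushanusch/chronos-large-final-fine-tuned-day-ahead-prices"
)

context = torch.randn(7 * 24)  # one week of hourly prices (dummy data)
forecast = pipeline.predict(context, prediction_length=24)  # next 24 hours
print(forecast.shape)  # (num_series, num_samples, prediction_length)
```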
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wochaori/model
|
wochaori
| 2025-02-16T16:38:29Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-16T16:36:10Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** wochaori
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF
|
mradermacher
| 2025-02-16T16:35:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"ChaoticNeutrals/BuRP_7B",
"SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE",
"en",
"base_model:Uncanned/BuRP-Lomaid-v0.1-7B-bf16",
"base_model:quantized:Uncanned/BuRP-Lomaid-v0.1-7B-bf16",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-02-16T15:49:58Z |
---
base_model: Uncanned/BuRP-Lomaid-v0.1-7B-bf16
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- ChaoticNeutrals/BuRP_7B
- SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Uncanned/BuRP-Lomaid-v0.1-7B-bf16
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
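For a concrete starting point, here is a minimal sketch with the `llama-cpp-python` bindings, using the Q4_K_M file from the table below (the sampling call is illustrative):

```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo via huggingface_hub
llm = Llama.from_pretrained(
    repo_id="mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF",
    filename="BuRP-Lomaid-v0.1-7B-bf16.i1-Q4_K_M.gguf",
)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```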
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Sophie-Rain-SpiderMan-Video-Free/Sophie.Rain.Leaked.Video.Tutorial
|
Sophie-Rain-SpiderMan-Video-Free
| 2025-02-16T16:35:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:31:00Z |
<p><a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️</a></p>
<p><a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤 Download❤️❤️⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="0" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Original Viral video Nudes took the internet by storm and amazed viewers on various Leaked social media platforms. Sophie Rain Spiderman, a young and talented digital creator, recently became famous thanks to this interesting video.
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
✅❤❤==►► https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
|
SOPHIE-RAIN-SPIDERMAN-SEX-VIDEO-LINK/Sex.sophie.rain.spiderman.viral.videos.link.on.social.media.x.trending.now
|
SOPHIE-RAIN-SPIDERMAN-SEX-VIDEO-LINK
| 2025-02-16T16:32:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:16:52Z |
<a href="https://tinyurl.com/5n6bjbnr?v=news-es-tv" rel="nofollow"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo"></a>
|
Deekila-Sherpa-Aniket-Videos-Free/Deekila.And.Aniket.Leaked.Video.On.Social.Media.X.Twitter
|
Deekila-Sherpa-Aniket-Videos-Free
| 2025-02-16T16:30:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:29:07Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)](https://lekedvideo.xyz/watch/?v=Deekila-Sherpa-Aniket)
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://lekedvideo.xyz/watch/?v=Deekila-Sherpa-Aniket)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://lekedvideo.xyz/watch/?v=Deekila-Sherpa-Aniket)
|
sanujen/mBART_Tamil_Colloquial_to_Standard
|
sanujen
| 2025-02-16T16:30:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"text-generation-inference",
"ta",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-02-15T20:40:36Z |
---
license: apache-2.0
language:
- ta
base_model:
- facebook/mbart-large-50-many-to-many-mmt
pipeline_tag: text2text-generation
library_name: transformers
tags:
- text-generation-inference
---
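## Usage

A minimal colloquial-to-standard Tamil sketch, assuming this checkpoint keeps the standard mBART-50 interface (the `ta_IN` language code and the example sentence are assumptions):

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "sanujen/mBART_Tamil_Colloquial_to_Standard"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="ta_IN", tgt_lang="ta_IN")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("வீட்டுக்கு வர்றியா?", return_tensors="pt")  # colloquial Tamil (assumed example)
outputs = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ta_IN"])
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```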
|
marcelbinz/Llama-3.1-RandomInit-70B
|
marcelbinz
| 2025-02-16T16:28:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-16T16:09:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF
|
mradermacher
| 2025-02-16T16:28:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"ChaoticNeutrals/BuRP_7B",
"SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE",
"en",
"base_model:Uncanned/BuRP-Lomaid-v0.1-7B-bf16",
"base_model:quantized:Uncanned/BuRP-Lomaid-v0.1-7B-bf16",
"endpoints_compatible",
"region:us"
] | null | 2025-02-16T14:24:48Z |
---
base_model: Uncanned/BuRP-Lomaid-v0.1-7B-bf16
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- ChaoticNeutrals/BuRP_7B
- SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Uncanned/BuRP-Lomaid-v0.1-7B-bf16
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BuRP-Lomaid-v0.1-7B-bf16-GGUF/resolve/main/BuRP-Lomaid-v0.1-7B-bf16.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
radce/Llama-3.2-3B-ru-v1.1
|
radce
| 2025-02-16T16:27:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-16T15:08:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
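The tags suggest a conversational Llama checkpoint, so here is a minimal chat sketch, assuming the tokenizer ships a chat template (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "radce/Llama-3.2-3B-ru-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Привет! Кто ты?"}]  # "Hi! Who are you?"
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```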
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DmitryYarov/aristotle_interface
|
DmitryYarov
| 2025-02-16T16:26:40Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ai-forever/rugpt3small_based_on_gpt2",
"base_model:finetune:ai-forever/rugpt3small_based_on_gpt2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-16T14:33:31Z |
---
library_name: transformers
base_model: ai-forever/rugpt3small_based_on_gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: aristotle_interface
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aristotle_interface
This model is a fine-tuned version of [ai-forever/rugpt3small_based_on_gpt2](https://huggingface.co/ai-forever/rugpt3small_based_on_gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0259
- Accuracy: 0.4040
## Model description

This model is a fine-tuned version of ai-forever/rugpt3small_based_on_gpt2, designed for causal language modeling tasks. It has been trained on a custom dataset to generate coherent and contextually relevant text.

### Training details

- Training epochs: 29.86
- Total FLOPs: 8,153,103 GF
- Training loss: 3.8147
- Training runtime: 35 minutes 43.75 seconds
- Number of training samples: 291
- Training samples per second: 4.072
- Training steps per second: 0.056

### Evaluation metrics

- Evaluation epoch: 29.86
- Evaluation accuracy: 40.4%
- Evaluation loss: 3.0259
- Evaluation runtime: 0.12 seconds
- Number of evaluation samples: 1
- Evaluation samples per second: 8.08
- Evaluation steps per second: 8.08
- Perplexity: 20.6125

## Intended use

This model is intended for text generation tasks where coherent and contextually appropriate responses are required. It can be used in applications such as chatbots, content creation, and more.

## Limitations

- The model has been trained on a limited dataset (291 samples), which may affect its generalization capabilities.
- The evaluation accuracy of approximately 40% indicates that the model may not perform optimally across all contexts.
- The perplexity score suggests room for improvement in generating more confident predictions.

## Future work

To enhance the performance of this model, consider the following:

- Increase the size and diversity of the training dataset.
- Experiment with additional training epochs or different hyperparameters.
- Evaluate the model on a broader set of examples to better assess its capabilities.
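## How to use

A minimal generation sketch (the prompt and sampling settings are illustrative assumptions, not settings from the training run):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DmitryYarov/aristotle_interface"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Что есть добродетель?"  # "What is virtue?" — illustrative Russian prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```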
## Training procedure

The model was trained using the `transformers` library and the `run_clm.py` script. Here's a summary of the training process:

* **Model:** `ai-forever/rugpt3small_based_on_gpt2` (a Russian-language GPT-2 model).
* **Objective:** Causal language modeling (text generation).
* **Hardware:** Google Colab with a single CUDA-enabled GPU.
* **Mixed precision:** FP16 training was enabled to reduce the memory footprint and potentially improve training speed.
* **Optimizer:** AdamW (`adamw_torch`).
* **Learning rate:** `3e-5`.
* **Warmup:** A linear warmup schedule with `500` warmup steps.
* **Training data:** Custom text dataset loaded from Aristotle's major works using the `plain_text` dataset configuration (32,835 examples):
  * Aristotle, Categories (Категории)
  * Aristotle, Nicomachean Ethics (Никомахова этика)
  * Aristotle, Physics (Физика)
  * Aristotle, Metaphysics (Метафизика)
  * Aristotle, Rhetoric (Риторика)
  * Aristotle, Poetics (Поэтика)
* **Validation data:** Custom text dataset (Aristotle, Nicomachean Ethics, https://lib.ru/POEEAST/ARISTOTEL/nikomah.txt) using the `plain_text` dataset configuration. The validation set contained 111 examples.
* **Batch size:** A per-device batch size of `8` with gradient accumulation of `8`, for an effective batch size of 64.
* **Sequence length:** The maximum sequence length (block size) was set to `2048`.
* **Gradient checkpointing:** Enabled to reduce memory consumption.
* **Epochs:** Trained for `30` epochs (the training data was passed over 30 times).
* **Evaluation:** Performed every `1000` steps using the validation dataset.
* **Logging:** Training progress and metrics were logged every `100` steps to TensorBoard and Weights & Biases (WandB).
* **Checkpoints:** Model checkpoints were saved every `1000` steps, with a limit of `3` saved checkpoints.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0
|
daniel40/dea22ed2-54ae-4395-ac92-c26c345d2e93
|
daniel40
| 2025-02-16T16:26:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-02-16T14:47:38Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dea22ed2-54ae-4395-ac92-c26c345d2e93
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dea22ed2-54ae-4395-ac92-c26c345d2e93
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
susmitsil/ppo-LunarLander-ss-v2
|
susmitsil
| 2025-02-16T16:23:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-02-16T16:15:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.26 +/- 17.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `huggingface_sb3` naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repo's file list if loading fails
checkpoint = load_from_hub("susmitsil/ppo-LunarLander-ss-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
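To sanity-check the reported mean reward, a short evaluation sketch (assumes Gymnasium with Box2D installed; newer Gymnasium releases rename the environment to `LunarLander-v3`):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")  # use "LunarLander-v3" on Gymnasium >= 1.0
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```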
|
Klindle/gawk_toon3000
|
Klindle
| 2025-02-16T16:22:59Z | 0 | 0 | null |
[
"hunyuan",
"hunyuan-video",
"hunyuan-lora",
"lora",
"replicate",
"text-to-video",
"en",
"base_model:tencent/HunyuanVideo",
"base_model:adapter:tencent/HunyuanVideo",
"license:other",
"region:us"
] |
text-to-video
| 2025-02-16T07:54:37Z |
---
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE
language:
- en
tags:
- hunyuan
- hunyuan-video
- hunyuan-lora
- lora
- replicate
base_model: "tencent/HunyuanVideo"
pipeline_tag: text-to-video
# widget:
# - text: >-
# prompt
# output:
# url: https://...
---
# Gawk_Toon3000
<Gallery />
Trained on Replicate using:
https://replicate.com/zsxkib/hunyuan-video-lora/train
|
Daytona500-Free/LIVE
|
Daytona500-Free
| 2025-02-16T16:22:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:18:47Z |
[🔴 GO LIVE==►► CLICK HERE TO WATCH NOW](https://lekedvideo.xyz/allsports2025/?V=LIVE)
[🔴 STREAMING==►► CLICK HERE TO WATCH NOW LIVE](https://lekedvideo.xyz/allsports2025/?V=LIVE)
[<img alt="fsd" src="https://i.postimg.cc/Gmqvfxfh/tv-image.gif">](https://lekedvideo.xyz/allsports2025/?V=LIVE)
The Super Bowl might be over (and not the most satisfying), but this Sunday we get the Stock Car Super Bowl: The 2025 Daytona 500. The premier NASCAR event sees 45 racers on Florida’s Daytona International Speedway chasing the Harley J. Earl Trophy.

At a Glance: How to Watch Daytona 500

- Stream: DirecTV Stream, Fubo, Hulu + Live TV, Sling TV
- Channel: FOX
- Date, Start Time: Sunday, Feb. 16 at 2:30 p.m. ET

If you’re looking to watch the 2025 Daytona 500 today, read on. Below is a full guide on the best ways to livestream the NASCAR race without cable, plus key details about the big race.

How to Watch the 2025 Daytona 500 Online

The 2025 Daytona 500 airs live on FOX. If you don’t have cable, you’ll want to get a live TV streaming service to watch the Daytona 500 online. Below are some of the best options that carry FOX — most of which offer free trials.

2025 Daytona 500 Schedule: Date, Start Time

The 2025 Daytona 500 is happening today, Sunday, Feb. 16. FOX coverage starts at 2:30 p.m. ET and the flag wave is scheduled for 3:11 p.m. ET.

2025 Daytona 500 Lineup, Favorites

One of the Daytona 500’s draws is how hard it is to predict. However, of the 45 drivers, there are a few standouts to watch. Kyle Busch is currently the favorite (+1100), according to oddsmakers, despite never winning the race in his 19 seasons. Denny Hamlin (+1200), Ryan Blaney (+1200), Joey Logano (+1300), Chase Elliott (+1300), Brad Keselowski (+1400), and Kyle Larson round out the top contenders.
|
BAFTA-Free/BAFTA.Film.Awards.2025.Live.Streams.Free.Reddit
|
BAFTA-Free
| 2025-02-16T16:22:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:13:47Z |
BAFTA 2025 LIVE on OTT: Here's when and where you can stream David Tennant-hosted awards show.
[🔴 GO LIVE==►► CLICK HERE TO WATCH NOW](https://lekedvideo.xyz/allsports2025/?V=LIVE)
[🔴 STREAMING==►► CLICK HERE TO WATCH NOW LIVE](https://lekedvideo.xyz/allsports2025/?V=LIVE)
[<img alt="fsd" src="https://i.postimg.cc/Gmqvfxfh/tv-image.gif">](https://lekedvideo.xyz/allsports2025/?V=LIVE)
Next up in the prestigious awards ahead of the Oscars 2025 is the 78th British Academy Film Awards—also known as the BAFTAs. On February 16, 2025, London's Royal Festival Hall in the Southbank Centre will host the much-awaited event. The event will honor the best domestic and international films of 2024.
Host and live streaming

For the second year running, David Tennant will host the event. Live streaming of the program will be available on several worldwide broadcasters, including Lionsgate Play (OTTplay Premium) in India. You can begin live streaming the awards at 11:30 pm on February 16, 2025, Sunday.
This year, one new award will celebrate the very best films appealing to intergenerational audiences, that is, for kids and family films. This follows the 2020 addition of Best Casting as the only new category to the EE BAFTA Film Awards in the past five years. This new category of awards will profile the essential creative contributions of the children's media sector, as was in 2023, and would be specifically for family films and children's films. On January 3, 2025, the BAFTA longlists were revealed. On January 15, 2025, the nominees were revealed by Mia McKenna-Bruce, winner of the 2024 EE Rising Star Award, and Will Sharpe, winner of the BAFTA TV Award. On January 7, 2025, the EE Rising Star Award candidates were announced; this year celebrates the 20th anniversary of the category in which the British public votes. Presenters for the category will be Letitia Wright (2019 winner) and James McAvoy (first winner).
Top nominated films

February 16, 2025, is when the winners will be revealed. With 15 nominations, the Spanish-language French musical crime film Emilia Pérez topped the longlists, followed by Conclave with 14. The Brutalist, A Complete Unknown, and The Substance each have 11 nominations. 13 nominations for Emilia Pérez are equal to the record set by three films in a single year—Barbie, Killers of the Flower Moon, and Oppenheimer—and in 2022—All Quiet on the Western Front—which was nominated for Best Film at the BAFTAs. Conclave has the most nominations (12), followed by Emilia Pérez (11) and The Brutalist (9).
|
Lily-Phillips-101-Challenge-Video-X/FULL.Lily.Phillips.101.Challenge.Video.Viral.Video.On.Social.Media.X
|
Lily-Phillips-101-Challenge-Video-X
| 2025-02-16T16:22:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:22:13Z |
<div>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Lily Phillips">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Lily Phillips">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Lily Phillips"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p>
</div>
|
BAFTA-Free/BAFTAs.2025.LIVE.STREAMS.ON.TV.CHANNEL
|
BAFTA-Free
| 2025-02-16T16:22:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:12:46Z |
[🔴 GO LIVE==►► CLICK HERE TO WATCH NOW](https://lekedvideo.xyz/allsports2025/?V=LIVE)
[🔴 STREAMING==►► CLICK HERE TO WATCH NOW LIVE](https://lekedvideo.xyz/allsports2025/?V=LIVE)
[<img alt="fsd" src="https://i.postimg.cc/Gmqvfxfh/tv-image.gif">](https://lekedvideo.xyz/allsports2025/?V=LIVE)
BAFTA 2025 LIVE on OTT: Here's when and where you can stream David Tennant-hosted awards show.
Next up in the prestigious awards ahead of the Oscars 2025 is the 78th British Academy Film Awards—also known as the BAFTAs. On February 16, 2025, London's Royal Festival Hall in the Southbank Centre will host the much-awaited event. The event will honor the best domestic and international films of 2024.
Host and live streaming

For the second year running, David Tennant will host the event. Live streaming of the program will be available on several worldwide broadcasters, including Lionsgate Play (OTTplay Premium) in India. You can begin live streaming the awards at 11:30 pm on February 16, 2025, Sunday.
This year, one new award will celebrate the very best films appealing to intergenerational audiences, that is, for kids and family films. This follows the 2020 addition of Best Casting as the only new category to the EE BAFTA Film Awards in the past five years. This new category of awards will profile the essential creative contributions of the children's media sector, as was in 2023, and would be specifically for family films and children's films. On January 3, 2025, the BAFTA longlists were revealed. On January 15, 2025, the nominees were revealed by Mia McKenna-Bruce, winner of the 2024 EE Rising Star Award, and Will Sharpe, winner of the BAFTA TV Award. On January 7, 2025, the EE Rising Star Award candidates were announced; this year celebrates the 20th anniversary of the category in which the British public votes. Presenters for the category will be Letitia Wright (2019 winner) and James McAvoy (first winner).
Top nominated films

February 16, 2025, is when the winners will be revealed. With 15 nominations, the Spanish-language French musical crime film Emilia Pérez topped the longlists, followed by Conclave with 14. The Brutalist, A Complete Unknown, and The Substance each have 11 nominations. 13 nominations for Emilia Pérez are equal to the record set by three films in a single year—Barbie, Killers of the Flower Moon, and Oppenheimer—and in 2022—All Quiet on the Western Front—which was nominated for Best Film at the BAFTAs. Conclave has the most nominations (12), followed by Emilia Pérez (11) and The Brutalist (9).
|
maithal/my_awesome_model
|
maithal
| 2025-02-16T16:21:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-14T11:24:24Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: maithal/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# maithal/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1376
- Validation Loss: 0.2027
- Train Accuracy: 0.9299
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
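A quick inference sketch (the label mapping depends on the undocumented fine-tuning data, so treat the returned labels as placeholders):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="maithal/my_awesome_model")
print(classifier("This movie was surprisingly good!"))
```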
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
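For reference, the serialized optimizer above corresponds to roughly the following Keras construction (a sketch reconstructed from the config shown; variable names are illustrative):
```python
import tensorflow as tf

# Rebuild the learning-rate schedule and optimizer listed in the config above
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=7810,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```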
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2545 | 0.1986 | 0.9224 | 0 |
| 0.1376 | 0.2027 | 0.9299 | 1 |
### Framework versions
- Transformers 4.48.3
- TensorFlow 2.18.0
- Datasets 3.3.0
- Tokenizers 0.21.0
|
Lily-Phillips-101-Challenge-Video-X/New.Lily.Phillips.Video.Viral.Leaked.on.social.media.x.trending
|
Lily-Phillips-101-Challenge-Video-X
| 2025-02-16T16:21:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:21:09Z |
|
baby-dev/8a8719db-5751-442a-a7bc-7adc343c3525
|
baby-dev
| 2025-02-16T16:20:46Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-02-16T14:46:53Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8a8719db-5751-442a-a7bc-7adc343c3525
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 8a8719db-5751-442a-a7bc-7adc343c3525
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
juliushanusch/chronos-tiny-final-fine-tuned-day-ahead-prices
|
juliushanusch
| 2025-02-16T16:20:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-02-16T16:20:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
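The repository name suggests a Chronos-T5 checkpoint fine-tuned for day-ahead electricity price forecasting. Assuming it follows the amazon/chronos-t5 family, a minimal forecasting sketch could look like the following; the price history and horizon are illustrative assumptions:
```python
import torch
from chronos import ChronosPipeline  # pip install chronos-forecasting

# A sketch: forecast 24 hourly day-ahead prices from a short history window
pipe = ChronosPipeline.from_pretrained(
    "juliushanusch/chronos-tiny-final-fine-tuned-day-ahead-prices",
    device_map="cpu",
    torch_dtype=torch.float32,
)
context = torch.tensor([50.0, 48.2, 47.9, 55.1, 60.3])  # placeholder price history
forecast = pipe.predict(context, prediction_length=24)  # [series, samples, 24]
print(forecast.quantile(0.5, dim=1))  # median forecast per hour
```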
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amina-mourky/glot500-word-dropout-0.1-en-wo
|
amina-mourky
| 2025-02-16T16:19:50Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:cis-lmu/glot500-base",
"base_model:finetune:cis-lmu/glot500-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-02-16T16:05:11Z |
---
library_name: transformers
license: apache-2.0
base_model: cis-lmu/glot500-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: glot500-word-dropout-0.1-en-wo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glot500-word-dropout-0.1-en-wo
This model is a fine-tuned version of [cis-lmu/glot500-base](https://huggingface.co/cis-lmu/glot500-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7677
- Precision: 0.3016
- Recall: 0.1567
- F1: 0.2063
- Accuracy: 0.4099
## Model description
More information needed
## Intended uses & limitations
More information needed
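As a starting point, the checkpoint can be loaded with the standard token-classification pipeline. This is a minimal sketch; the tag set, and the English-Wolof pairing implied by the repo name, are assumptions:
```python
from transformers import pipeline

# A sketch: load the fine-tuned glot500 checkpoint for token classification
tagger = pipeline(
    "token-classification",
    model="amina-mourky/glot500-word-dropout-0.1-en-wo",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level tags
)
print(tagger("Example sentence to tag."))
```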
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.3858 | 1.0 | 625 | 1.9800 | 0.2204 | 0.0999 | 0.1375 | 0.3622 |
| 2.0113 | 2.0 | 1250 | 1.7677 | 0.3016 | 0.1567 | 0.2063 | 0.4099 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0
|
Leaks-Viral-Video/Chicken.Earls.Leakey.Tx.Leaked.Video.OnlyFans
|
Leaks-Viral-Video
| 2025-02-16T16:17:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:17:17Z |
|
none-yet/SelfDriving
|
none-yet
| 2025-02-16T16:17:26Z | 0 | 0 |
keras
|
[
"keras",
"license:apache-2.0",
"region:us"
] | null | 2025-02-16T16:05:16Z |
---
license: apache-2.0
---
# MoE Car Model
## Overview
The MoE (Mixture of Experts) Car Model is a deep learning model designed for autonomous driving and vehicle behavior prediction. It leverages a Mixture of Experts architecture to optimize decision-making across different driving scenarios, improving efficiency and adaptability in real-world environments.
## WARNING: This model may show as unsafe because it runs ResNet when you use it.
## Model Architecture
The MoE Car Model consists of the following key components:
- **Input Layer:** Accepts sensory data (camera images, LiDAR, GPS, IMU, etc.).
- **Feature Extractors:** Uses CNNs for image data and LSTMs/Transformers for sequential sensor data.
- **Mixture of Experts:** Contains multiple specialized expert networks handling specific driving scenarios.
- **Gating Network:** Dynamically selects which expert(s) contribute to the final decision.
- **Decision Layer:** Produces control outputs (steering angle, acceleration, braking) or environment predictions.
### Model Parameters
- **Total Parameters:** ~40M
- **Number of Experts:** 16
- **Expert Architecture:** Transformer-based with 12 layers per expert
- **Gating Network:** 4-layer MLP with softmax activation
- **Feature Extractors:** ResNet-50 for images, Transformer for LiDAR/GPS
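To make the routing concrete, here is a minimal sketch of a softmax-gated expert layer in the spirit of the parameters above (16 experts, an MLP gate); the hidden sizes and two-layer experts are illustrative assumptions, not the released weights:
```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Minimal softmax-gated mixture-of-experts layer (illustrative sizes)."""
    def __init__(self, dim=256, num_experts=16):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        )
        # Small MLP gate; softmax weights decide each expert's contribution
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_experts))

    def forward(self, x):                                        # x: [batch, dim]
        weights = self.gate(x).softmax(dim=-1)                   # [batch, E]
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # [batch, E, dim]
        return (weights.unsqueeze(-1) * outs).sum(dim=1)         # weighted mixture

print(MoELayer()(torch.randn(4, 256)).shape)  # torch.Size([4, 256])
```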
## Training Details
- **Dataset:** 10 million driving scenarios from real-world and simulated environments
- **Batch Size:** 128
- **Learning Rate:** 2e-4 (decayed using cosine annealing)
- **Optimizer:** AdamW
- **Training Time:** 1h 24m 28s
- **Hardware:** 1x NVIDIA T4 (16 GB)
- **Framework:** PyTorch
## Inference
To run inference using the MoE Car Model:
### Install Dependencies
```bash
pip install torch torchvision numpy opencv-python
```
### Load and Run the Model
```python
import torch
import torchvision.transforms as transforms
import cv2
from model import MoECarModel # Assuming model implementation is in model.py
# Load model
model = MoECarModel()
model.load_state_dict(torch.load("moe_car_model.pth"))
model.eval()
# Preprocessing function
def preprocess_image(image_path):
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
transform = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])
return transform(image).unsqueeze(0)
# Load sample image
image_tensor = preprocess_image("test_image.jpg")
# Run inference
with torch.no_grad():
output = model(image_tensor)
print("Predicted control outputs:", output)
```
PS: This code is illustrative; adapt it to your own model implementation.
## Applications
- Autonomous driving
- Driver assistance systems
- Traffic behavior prediction
- Reinforcement learning simulations
## Future Improvements
- Optimization for edge devices
- Integration with real-time sensor fusion
- Reinforcement learning fine-tuning
---
|
WasamiKirua/GGUF-Q8_0-Samanta-NewGenesis-Mistral-Small-DPO
|
WasamiKirua
| 2025-02-16T16:17:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"dpo",
"eq",
"psychology",
"phylosophy",
"companionship",
"llama-cpp",
"gguf-my-repo",
"it",
"en",
"dataset:WasamiKirua/Samantha-NeonGenesis-Unsloth-2.0",
"dataset:WasamiKirua/Human-Like-DPO-ita",
"base_model:WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO",
"base_model:quantized:WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-16T16:15:17Z |
---
base_model: WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- dpo
- eq
- psychology
- phylosophy
- companionship
- llama-cpp
- gguf-my-repo
language:
- it
- en
datasets:
- WasamiKirua/Samantha-NeonGenesis-Unsloth-2.0
- WasamiKirua/Human-Like-DPO-ita
library_name: transformers
---
# WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO-Q8_0-GGUF
This model was converted to GGUF format from [`WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO`](https://huggingface.co/WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO-Q8_0-GGUF --hf-file samanta-newgenesis-mistral-small-dpo-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO-Q8_0-GGUF --hf-file samanta-newgenesis-mistral-small-dpo-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO-Q8_0-GGUF --hf-file samanta-newgenesis-mistral-small-dpo-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO-Q8_0-GGUF --hf-file samanta-newgenesis-mistral-small-dpo-q8_0.gguf -c 2048
```
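Alternatively, the same file can be loaded from Python through the llama-cpp-python bindings; this is a sketch assuming the bindings are installed (`pip install llama-cpp-python`):
```python
from llama_cpp import Llama

# A sketch: pull the quantized file from the Hub and run one completion
llm = Llama.from_pretrained(
    repo_id="WasamiKirua/Samanta-NewGenesis-Mistral-Small-DPO-Q8_0-GGUF",
    filename="samanta-newgenesis-mistral-small-dpo-q8_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```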
|
1-Girl-15-Hands-Viral-Leaked-Video/New.1.Girl.15.Hands.Video.Viral.Leaked.on.social.media.x.trending
|
1-Girl-15-Hands-Viral-Leaked-Video
| 2025-02-16T16:16:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:16:29Z |
|
Leaks-Viral-Video/Noturhoneybb.Leaked.Video.Viral.Social.Media.Instagram
|
Leaks-Viral-Video
| 2025-02-16T16:15:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:15:21Z |
|
New-Videos/Emblack-Leaked-Video-Sex-Viral-Trending-ON.X
|
New-Videos
| 2025-02-16T16:14:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:10:46Z |
|
Leaks-Viral-Video/Noturhoneybb.Leaked.Video.Viral.Social.Media
|
Leaks-Viral-Video
| 2025-02-16T16:14:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T16:14:03Z |
|
u79jm/ppo-CartPole-v1
|
u79jm
| 2025-02-16T16:13:29Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-02-16T13:56:23Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 264.70 +/- 67.22
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'experiment',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 100000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'u79jm/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
amphlmel/my-mel-name
|
amphlmel
| 2025-02-16T16:11:31Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-16T15:46:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CYBRMEL
---
# My Mel Name
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CYBRMEL` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('amphlmel/my-mel-name', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
professorf/DeepSeek-R1-Distill-Qwen-1.5B-gguf
|
professorf
| 2025-02-16T16:08:34Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"arxiv:2501.12948",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-16T14:24:22Z |
---
license: mit
library_name: transformers
---
<hr>
<center>GGUF Quantized DeepSeek-R1-Distill-Qwen-1.5B Models<br>
by Professor Nick V. Flor<br>
For research reproducibility purposes</center>
<hr>
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings when running these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and turn on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
**NOTE: Hugging Face Transformers does not directly support these models yet.**
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
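As an illustration, these recommendations map onto an OpenAI-compatible request against a local vLLM server such as the one started above; the endpoint URL, API key, and model name are assumptions for the sketch:
```python
from openai import OpenAI

# A sketch: query a local OpenAI-compatible server with the recommended settings
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{
        "role": "user",  # no system prompt: all instructions go in the user turn
        "content": "Please reason step by step, and put your final answer "
                   "within \\boxed{}. What is 12 * 34?",
    }],
    temperature=0.6,  # recommended range: 0.5-0.7
    top_p=0.95,
)
print(resp.choices[0].message.content)
```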
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
viva-996/FineLlama-3.1-8B
|
viva-996
| 2025-02-16T16:07:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-16T11:27:18Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
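In the absence of an official snippet, a minimal loading sketch based on the repository's transformers/safetensors tags might look like this; the dtype, device placement, and prompt are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A sketch: load the checkpoint as a causal LM and generate a few tokens
tokenizer = AutoTokenizer.from_pretrained("viva-996/FineLlama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained(
    "viva-996/FineLlama-3.1-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```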
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AXERA-TECH/Qwen2.5-1.5B-Instruct-GPTQ-Int8
|
AXERA-TECH
| 2025-02-16T16:04:17Z | 2 | 1 | null |
[
"license:bsd-3-clause",
"region:us"
] | null | 2025-01-11T16:46:02Z |
---
license: bsd-3-clause
---
|
Shankarlakshmi/ner-model
|
Shankarlakshmi
| 2025-02-16T16:04:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-02-16T16:03:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
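Pending an official snippet, here is a minimal sketch based on the repository's BERT token-classification tags; the example sentence and the label set are assumptions:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# A sketch: tag one sentence with the fine-tuned BERT token classifier
tokenizer = AutoTokenizer.from_pretrained("Shankarlakshmi/ner-model")
model = AutoModelForTokenClassification.from_pretrained("Shankarlakshmi/ner-model")
inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
labels = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), labels)))
```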
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tuantmdev/aa3b325d-5e1c-47ad-81b9-46d53e58277b
|
tuantmdev
| 2025-02-16T16:03:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | 2025-02-16T11:08:34Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa3b325d-5e1c-47ad-81b9-46d53e58277b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0848cb003ea47c3c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0848cb003ea47c3c_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
field_system: ''
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: true
hub_model_id: tuantmdev/aa3b325d-5e1c-47ad-81b9-46d53e58277b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1e-4
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 40
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/0848cb003ea47c3c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
save_strategy: steps
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4ff0f833-6fbd-460e-bb47-20c60cb209ec
wandb_project: Gradients-On-Demand
wandb_run: unknown
wandb_runid: 4ff0f833-6fbd-460e-bb47-20c60cb209ec
warmup_steps: 80
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aa3b325d-5e1c-47ad-81b9-46d53e58277b
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0574
## Model description
More information needed
## Intended uses & limitations
More information needed
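Since this repository holds a LoRA adapter for Qwen/Qwen1.5-7B, a minimal loading sketch with PEFT could look like this; dtype and generation settings are left to the reader:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# A sketch: attach the LoRA adapter from this repo to its base model
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "tuantmdev/aa3b325d-5e1c-47ad-81b9-46d53e58277b")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B", trust_remote_code=True)
```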
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 80
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.7303 |
| 1.3305 | 0.0052 | 50 | 1.3352 |
| 1.1899 | 0.0105 | 100 | 1.3973 |
| 1.173 | 0.0157 | 150 | 1.3282 |
| 1.1272 | 0.0210 | 200 | 1.1735 |
| 1.0632 | 0.0262 | 250 | 1.0868 |
| 1.0925 | 0.0314 | 300 | 1.0667 |
| 1.1183 | 0.0367 | 350 | 1.0581 |
| 1.0928 | 0.0419 | 400 | 1.0574 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ASLP-lab/OSUM
|
ASLP-lab
| 2025-02-16T16:03:41Z | 0 | 2 | null |
[
"arxiv:2501.13306",
"license:apache-2.0",
"region:us"
] | null | 2025-02-15T14:25:11Z |
---
license: apache-2.0
---
This is the checkpoint of the open-source model released at the current stage of the OSUM project by the ASLP Lab at Northwestern Polytechnical University. Related links for this model:
Official code repository: https://github.com/ASLP-lab/OSUM
Hugging Face demo page: https://huggingface.co/spaces/ASLP-lab/OSUM
arXiv paper: https://arxiv.org/abs/2501.13306
|
Bvdlaan/bvdlaan
|
Bvdlaan
| 2025-02-16T16:01:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-16T15:51:18Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BVDLAAN
---
# Bvdlaan
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BVDLAAN` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('bvdlaan/bvdlaan', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
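If you want to bake the adapter into the base weights instead of keeping it hot-swappable, fusing can help. The sketch below is a minimal example, assuming your installed diffusers version exposes `fuse_lora`/`unfuse_lora` on this pipeline; the scale value is illustrative.
```py
# Sketch: fuse the LoRA into the base weights at reduced strength.
pipeline.fuse_lora(lora_scale=0.8)   # 0.8 = illustrative adapter strength
image = pipeline('BVDLAAN portrait, studio lighting').images[0]
pipeline.unfuse_lora()               # undo the fusion to recover the base model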
|
VRAIL-VIDEO/New-Secret-Therapy-Onlyfans-Leaks-Nudes-Leaked-Video
|
VRAIL-VIDEO
| 2025-02-16T16:00:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T15:55:20Z |
|
mradermacher/Impish_Mind_8B-i1-GGUF
|
mradermacher
| 2025-02-16T16:00:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:SicariusSicariiStuff/Impish_Mind_8B",
"base_model:quantized:SicariusSicariiStuff/Impish_Mind_8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-16T11:18:11Z |
---
base_model: SicariusSicariiStuff/Impish_Mind_8B
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/SicariusSicariiStuff/Impish_Mind_8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Impish_Mind_8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
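The quants listed below all ship as single files, but very large quants in this series are sometimes split into `*.partXofY` pieces. As a rough sketch (the part-naming pattern is an assumption — match it to the actual file names), you can rejoin them before loading:
```py
# Hedged sketch: byte-concatenate split GGUF parts into one loadable file.
from pathlib import Path

parts = sorted(Path(".").glob("Impish_Mind_8B.i1-Q6_K.gguf.part*of*"))
with open("Impish_Mind_8B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())
```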
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Impish_Mind_8B-i1-GGUF/resolve/main/Impish_Mind_8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF
|
mradermacher
| 2025-02-16T16:00:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"base_model:EpistemeAI/Fireball-R1-Llama-3.1-8B-Medical-COT",
"base_model:quantized:EpistemeAI/Fireball-R1-Llama-3.1-8B-Medical-COT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-16T13:08:03Z |
---
base_model: EpistemeAI/Fireball-R1-Llama-3.1-8B-Medical-COT
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/EpistemeAI/Fireball-R1-Llama-3.1-8B-Medical-COT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-R1-Llama-3.1-8B-Medical-COT-i1-GGUF/resolve/main/Fireball-R1-Llama-3.1-8B-Medical-COT.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
yanka9/vilt_finetuned_deepfashionVQA_v2
|
yanka9
| 2025-02-16T15:57:45Z | 13 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vilt",
"visual-question-answering",
"arxiv:2102.03334",
"license:mit",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2024-04-10T16:42:33Z |
---
tags:
- visual-question-answering
license: mit
widget:
- text: what fabric is the lower cloth made of?
src: >-
https://assets.myntassets.com/v1/images/style/properties/7a5b82d1372a7a5c6de67ae7a314fd91_images.jpg
- text: is there a hat worn?
src: >-
https://assets.myntassets.com/v1/images/style/properties/fee54b57fcd02b7c07d42b0918025099_images.jpg
---
# FaVQA - Fashion-related Visual Question Answering
<!-- Provide a quick summary of what the model is/does. -->
### Summary
A Vision-and-Language Pre-training (VLP) model for a fashion-related downstream task, Visual Question Answering (VQA). The related model, ViLT, was proposed in [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) and incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for VLP.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** Vision Question Answering, ViLT
- **License:** MIT
<!-- - **:** [dandelin/vilt-b32-finetuned-vqa](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) -->
- **Train/test dataset:** [yanka9/deepfashion-for-VQA](https://huggingface.co/datasets/yanka9/deepfashion-for-VQA), derived from [DeepFashion](https://github.com/yumingj/DeepFashion-MultiModal?tab=readme-ov-file)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Demo:** [🤗 Space](https://huggingface.co/spaces/yanka9/fashion-vqa)
## How to Get Started with the Model
Use the code below to get started with the model; usage is similar to the original ViLT model.
```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
# prepare image + question
image = Image.open(YOUR_IMAGE)
text = "how long is the sleeve?"
processor = ViltProcessor.from_pretrained("yanka9/vilt_finetuned_deepfashionVQA_v2")
model = ViltForQuestionAnswering.from_pretrained("yanka9/vilt_finetuned_deepfashionVQA_v2")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Answer:", model.config.id2label[idx])
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
A custom training dataset was developed for training the ViLT classifier. It was derived from DeepFashion-MultiModal, a large-scale, high-quality human dataset with rich multi-modal annotations. It contains 44,096 high-resolution human images, including 12,701 full-body human images. For each full-body image, the authors manually annotated human parsing labels across 24 classes.
The dataset has several other properties, but for the scope of this project only the full-body images and labels were used to generate the training dataset. Each label covers at least one of the following categories: fabric, color, and shape. In total, 209,481 questions were generated for the 44,096 images; the categories used for training are listed below.
```
'Color.LOWER_CLOTH',
'Color.OUTER_CLOTH',
'Color.UPPER_CLOTH',
'Fabric.OUTER_CLOTH',
'Fabric.UPPER_CLOTH',
'Gender',
'Shape.CARDIGAN',
'Shape.COVERED_NAVEL',
'Shape.HAT',
'Shape.LOWER_CLOTHING_LENGTH',
'Shape.NECKWEAR',
'Shape.RING',
'Shape.SLEEVE',
'Shape.WRISTWEAR'
```
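To make the mapping from labels to questions concrete, here is a purely hypothetical sketch of template-based question generation from the category labels above; the actual generation pipeline is not published in this card, and `TEMPLATES`/`questions_for` are illustrative names.
```python
# Hypothetical illustration: map annotation categories to question templates.
TEMPLATES = {
    "Shape.SLEEVE": "how long is the sleeve?",
    "Color.LOWER_CLOTH": "what is the color of the lower cloth?",
    "Shape.HAT": "is there a hat worn?",
}

def questions_for(labels):
    """Return the generated questions for one image's category labels."""
    return [TEMPLATES[label] for label in labels if label in TEMPLATES]

print(questions_for(["Shape.SLEEVE", "Shape.HAT"]))
# ['how long is the sleeve?', 'is there a hat worn?']
```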
### Question Types
The model supports both open-ended and close-ended (yes or no) questions. Below are examples of questions generated during the training phase.
```
'how long is the sleeve?',
'what is the length of the lower clothing?',
'how would you describe the color of the upper cloth?',
'what is the color of the lower cloth?',
'what fabric is the upper cloth made of?',
'who is the target audience for this garment?',
'is there a hat worn?',
'is the navel covered?',
'does the lower clothing cover the navel?',
```
<i>This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.</i>
|
limhayi/distilbert-base-uncased-finetuned-emotion
|
limhayi
| 2025-02-16T15:54:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-16T15:29:27Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2072
- Accuracy: 0.925
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8018 | 1.0 | 250 | 0.2865 | 0.9145 | 0.9146 |
| 0.2338 | 2.0 | 500 | 0.2072 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0
|
HenryEnyi/lora_model
|
HenryEnyi
| 2025-02-16T15:54:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-16T15:53:52Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HenryEnyi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
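As a minimal, hedged usage sketch (assuming this repo contains a PEFT LoRA adapter plus tokenizer files, which is Unsloth's usual save layout), the adapter can be attached to the 4-bit base model like this:
```python
# Sketch: attach the LoRA adapter to the quantized base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "HenryEnyi/lora_model")
tokenizer = AutoTokenizer.from_pretrained("HenryEnyi/lora_model")
```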
|
sunny199/NER-Model-Fine-Tuned
|
sunny199
| 2025-02-16T15:51:49Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-02-16T15:51:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Trendin-Video/Secret.Therapy.Onlyfans.Leaks.Nudes.Leaked.Video
|
Trendin-Video
| 2025-02-16T15:51:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T15:51:08Z |
|
DrSyedFaizan/medReport
|
DrSyedFaizan
| 2025-02-16T15:46:06Z | 0 | 0 | null |
[
"safetensors",
"bert",
"doi:10.57967/hf/4515",
"license:mit",
"region:us"
] | null | 2025-02-16T14:07:34Z |
---
license: mit
---
### 🩺 Clinical Diagnosis Application & medReport Model
Welcome to the Clinical Diagnosis Application, an NLP-powered deep learning solution for automated medical diagnosis based on clinical notes. This project leverages BioBERT, natural language processing, and Hugging Face Transformers to analyze patient reports and predict diseases with high accuracy.
🚀 Live Model Hosted on Hugging Face: DrSyedFaizan/medReport
🔬 Overview
medReport is a fine-tuned BioBERT model trained on clinical text data to predict diseases based on patient reports. The associated Clinical Diagnosis App allows users to upload medical notes (PDF/TXT) and receive disease predictions along with recommended medications and specialists.
✨ Features
✅ Fine-tuned BioBERT Model for medical text classification
✅ Predict diseases from clinical notes
✅ Extract text from PDFs and TXT files
✅ Recommend medications & specialists based on prediction
✅ Streamlit-powered web app for easy access
✅ Deployable on Hugging Face Spaces / Local Server
📂 Project Structure
```
📁 Clinical-Diagnosis-App/
│── 📂 patient_model/          # Trained BioBERT model files
│── 📂 results/                # Model training results & logs
│── 📂 sample_data/            # Sample clinical reports
│── 📜 app.py                  # Streamlit-based UI for predictions
│── 📜 requirements.txt        # Required dependencies
│── 📜 README.md               # Documentation
│── 📜 label_encoder.pkl       # Pre-trained Label Encoder
│── 📜 clinical_notes.csv      # Sample dataset
```
🚀 Installation & Setup
1️⃣ Clone the Repository
```
git clone https://github.com/SYEDFAIZAN1987/Clinical-Diagnosis-Application-using-Natural-Language-Processing.git
cd Clinical-Diagnosis-Application-using-Natural-Language-Processing
```
2️⃣ Install Dependencies
```
pip install -r requirements.txt
```
3️⃣ Run the Application
```
streamlit run app.py
```
The app will launch at http://localhost:8501 🎉
📌 Model Details
The medReport model is fine-tuned on a clinical notes dataset using BioBERT, a biomedical NLP model. It has been trained for multi-label classification, allowing it to predict diseases from unstructured clinical text.
🔗 Load the Model
You can access the trained model directly via Hugging Face:
```python
from transformers import BertForSequenceClassification, BertTokenizer
from huggingface_hub import hf_hub_download
import pickle
import torch

# Load the fine-tuned model and tokenizer from the Hub
model = BertForSequenceClassification.from_pretrained("DrSyedFaizan/medReport")
tokenizer = BertTokenizer.from_pretrained("DrSyedFaizan/medReport")

# Load the label encoder that maps class indices back to disease names
label_encoder_path = hf_hub_download(repo_id="DrSyedFaizan/medReport", filename="label_encoder.pkl")
with open(label_encoder_path, "rb") as f:
    label_encoder = pickle.load(f)
```
📊 Performance Metrics
| Metric   | Score |
|:---------|------:|
| Accuracy | 100%  |
✅ Trained on BioBERT
✅ Optimized with AdamW
✅ Fine-tuned for Clinical NLP
📖 Usage
🔹 Predict Disease from a Clinical Note
```python
def predict_disease(text, model, tokenizer, label_encoder):
    # Tokenize the clinical note and run a single forward pass
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits
    predicted_label = torch.argmax(logits, dim=1).item()
    return label_encoder.inverse_transform([predicted_label])[0]
```
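A minimal usage sketch (the clinical note below is illustrative):
```python
note = "Patient reports persistent heartburn and regurgitation after meals."
print(predict_disease(note, model, tokenizer, label_encoder))
# e.g. "Gastroesophageal Reflux Disease (GERD)"
```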
🎨 Web App UI (Streamlit)
The Streamlit UI allows drag & drop of PDF/TXT files for quick disease predictions.
📥 Upload Clinical Notes
1️⃣ Upload clinical notes (PDF or TXT)
2️⃣ Extract text from reports
3️⃣ Predict disease
4️⃣ Get medication & specialist recommendations
🏥 Example Predictions
| Clinical Note | Predicted Disease | Medications | Specialists |
|:--|:--|:--|:--|
| "Patient reports persistent heartburn..." | Gastroesophageal Reflux Disease (GERD) | Omeprazole, Ranitidine | Gastroenterologist |
| "Male patient with history of smoking, chronic cough..." | Chronic Obstructive Pulmonary Disease (COPD) | Tiotropium, Albuterol | Pulmonologist |
| "Elderly patient with diabetes, experiencing numbness..." | Diabetic Neuropathy | Metformin, Insulin | Endocrinologist |
🌍 Deployment Options
1️⃣ Run Locally with Streamlit
```
streamlit run app.py
```
2️⃣ Deploy on Hugging Face Spaces
- Create a Streamlit Space on Hugging Face
- Upload the repository
- Add a requirements.txt file
- app.py runs automatically
3️⃣ Deploy on Cloud (AWS, GCP, Azure)
- Use FastAPI + Uvicorn (see the sketch below)
- Deploy via Docker / Kubernetes
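As a hypothetical sketch of such a FastAPI wrapper (reusing the `model`, `tokenizer`, `label_encoder`, and `predict_disease` defined above; the route name, module name, and port are illustrative):
```python
# Hypothetical FastAPI wrapper around predict_disease for cloud deployment.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Note(BaseModel):
    text: str

@app.post("/predict")
def predict(note: Note):
    return {"disease": predict_disease(note.text, model, tokenizer, label_encoder)}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
```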
🛠️ Tech Stack
✔ BioBERT (Fine-Tuned)
✔ Transformers (Hugging Face)
✔ PyTorch (Deep Learning)
✔ Streamlit (UI Framework)
✔ Hugging Face Hub (Model Hosting)
🧑💻 Contribution
🤝 Contributions are welcome!
If you'd like to improve the model or app, feel free to fork the repo and submit a pull request.
1. Fork the repository
2. Clone it locally
3. Create a branch (`git checkout -b feature-new`)
4. Commit changes (`git commit -m "Added feature X"`)
5. Push and submit a PR
📩 Contact
💡 Author: Syed Faizan, MD
📧 Email: [email protected]
🤖 Hugging Face: DrSyedFaizan
📂 GitHub: SYEDFAIZAN1987
|
Mattimax/DATA-AI_Smol256M-Instruct
|
Mattimax
| 2025-02-16T15:44:20Z | 0 | 0 | null |
[
"safetensors",
"idefics3",
"multimodal",
"ai",
"vision-language",
"italian",
"it",
"en",
"dataset:Mattimax/DATA-AI_IT",
"base_model:HuggingFaceTB/SmolVLM-256M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM-256M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-15T13:32:53Z |
---
language:
- "it"
- "en"
thumbnail: "https://img.shields.io/badge/HuggingFace-Model-orange"
tags:
- multimodal
- ai
- vision-language
- italian
license: "apache-2.0"
datasets:
- "Mattimax/DATA-AI_IT"
metrics:
- "256M parametri"
- "Inferenze con < 1 GB di RAM GPU"
base_model: "HuggingFaceTB/SmolVLM-256M-Instruct"
---
# Mattimax/DATA-AI_Smol256M-Instruct


---
## 📜 License
The model is distributed under the **Apache 2.0** license, which permits commercial use, modification, distribution, and sublicensing.
## 📚 Dataset
- [Mattimax/DATA-AI_IT](https://huggingface.co/datasets/Mattimax/DATA-AI_IT)
## 🌍 Supported Languages
- it Italian
- en English
## 🏗 Base Model
- [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct)
## 🛠 Supported Library
- 🤗 Transformers
---
## 📝 Description
**"Mattimax/DATA-AI_Smol256M-Instruct"** is a multimodal AI model optimized for Italian, based on **"HuggingFaceTB/SmolVLM-256M-Instruct"** and fine-tuned on the **"Mattimax/DATA-AI_IT"** dataset.
The model is designed to interpret and generate text in combination with images, with strong efficiency on resource-constrained devices. Thanks to the Italian-specific fine-tuning, it delivers solid performance on multimodal tasks, improving answer accuracy and the naturalness of the language.
---
## 🚀 Key Features
✅ **Multimodality** – Supports joint processing of text and images.
✅ **Compactness** – Only **256M parameters**, with image inference requiring less than **1 GB of GPU RAM**.
✅ **Optimized for Italian** – Trained on a curated dataset to improve the quality of Italian responses.
✅ **Computational Efficiency** – Well suited to applications on resource-constrained hardware.
✅ **Open Source** – Built to democratize the use of AI and promote open research.
---
## 🏗 Origins of the Model
**[HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct)** is the base model from which **"Mattimax/DATA-AI_Smol256M-Instruct"** was developed.
📌 **SmolVLM-256M-Instruct** is currently the lightest multimodal model available.
📌 It processes text and images with an **ideal balance between performance and efficiency**.
📌 It can run on **resource-constrained hardware** without sacrificing answer quality.
---
## 🎯 Applications
🔹 **Image Captioning** – Automatic generation of detailed image descriptions.
🔹 **Visual Question Answering** – Answering questions about visual content.
🔹 **Multimodal Transcription and Translation** – Extracting and converting text from images.
🔹 **AI on Edge Devices** – Well suited to mobile or embedded applications.
---
## 🛠 How to Use It
The model can be loaded easily with 🤗 **Transformers**:
```python
from transformers import AutoModelForVision2Seq, AutoProcessor
import torch
from PIL import Image

# Load the model and processor
model_name = "Mattimax/DATA-AI_Smol256M-Instruct"
model = AutoModelForVision2Seq.from_pretrained(model_name)
processor = AutoProcessor.from_pretrained(model_name)

# Example input with an image and text ("Cosa c'è nell'immagine?" = "What is in the image?")
image = Image.open("example.jpg")
inputs = processor(images=image, text="Cosa c'è nell'immagine?", return_tensors="pt")

# Generate the answer
with torch.no_grad():
    outputs = model.generate(**inputs)

# Decode the answer
response = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print("Model response:", response)
```
---
## 🏁 Conclusions
✨ "Mattimax/DATA-AI_Smol256M-Instruct" is a step forward for multimodal AI in Italian.
💡 The model offers solid performance, is lightweight and open source, and is well suited to a variety of contexts.
|
AightBits/Mistral-7B-Instruct-v0.3-8.0bpw-h8-exl2
|
AightBits
| 2025-02-16T15:43:54Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:quantized:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"8-bit",
"exl2",
"region:us"
] | null | 2025-02-16T15:36:58Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Model Card for Mistral-7B-Instruct-v0.3
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/edit/main/README.md)
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
## Installation
It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
```
### Instruct following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
```
## Function calling with `transformers`
To use this example, you'll need `transformers` version 4.42.0 or higher. Please see the
[function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling)
in the `transformers` docs for more information.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
def get_current_weather(location: str, format: str):
    """
    Get the current weather

    Args:
        location: The city and state, e.g. San Francisco, CA
        format: The temperature unit to use. Infer this from the users location. (choices: ["celsius", "fahrenheit"])
    """
    pass
conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
tools = [get_current_weather]
# format and tokenize the tool use prompt
inputs = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Note that, for reasons of space, this example does not show a complete cycle of calling a tool and adding the tool call and tool
results to the chat history so that the model can use them in its next generation. For a full tool calling example, please
see the [function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling),
and note that Mistral **does** use tool call IDs, so these must be included in your tool calls and tool results. They should be
exactly 9 alphanumeric characters.
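As a hedged sketch of that cycle (following the pattern in the transformers chat-templating docs; the weather value is illustrative), appending a tool call and its result might look like this:
```python
import random
import string

# Mistral expects tool call IDs of exactly 9 alphanumeric characters.
tool_call_id = "".join(random.choices(string.ascii_letters + string.digits, k=9))

# Record the model's tool call in the chat history...
conversation.append({
    "role": "assistant",
    "tool_calls": [{
        "id": tool_call_id,
        "type": "function",
        "function": {"name": "get_current_weather",
                     "arguments": {"location": "Paris, FR", "format": "celsius"}},
    }],
})

# ...then append the tool's result so the next generation can use it.
conversation.append({
    "role": "tool",
    "tool_call_id": tool_call_id,
    "name": "get_current_weather",
    "content": "22.0",  # illustrative temperature
})
```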
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
|
rara-J/soojj-moomuu-a-tue-560015
|
rara-J
| 2025-02-16T15:43:48Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-02-16T15:43:48Z |
---
license: cc-by-nc-sa-4.0
---
|
Sophie-Rain-Spiderman-Viral-clips/Sophie.Rain.Video.Link.Short.Clip.HD
|
Sophie-Rain-Spiderman-Viral-clips
| 2025-02-16T15:42:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T15:42:03Z |
|
krshahvivek/distilroberta-ai-job-embeddings
|
krshahvivek
| 2025-02-16T15:41:22Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:809",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-distilroberta-v1",
"base_model:finetune:sentence-transformers/all-distilroberta-v1",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-16T15:40:27Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:809
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-distilroberta-v1
widget:
- source_sentence: Data pipeline architecture, Azure Data Factory, Apache Spark
sentences:
- 'Experience »
Prior experience working on a SAP ECC to SAP S4 Hana Migration Project.4+ years
in an ETL or Data Engineering roles; building and implementing data pipelines
and modeling data.Experience with SAP data and data structures.Experience managing
Snowflake instances, including data ingestion and modeling.Experience with IBM
DataStage is a plus.Very strong skills with SQL with the ability to write efficient
queries.Familiarity with Fivetran for replication.
What You’ll Do
Job requirements are met.Perform data analysis required to troubleshoot data related
issues and assist in the resolution of data issues.
Interested?
Qualified candidates should send their resumes to [email protected]
V-Soft Consulting Group is recognized among the top 100 fastest growing staffing
companies in North America, V-Soft Consulting Group is headquartered in Louisville,
KY with strategic locations in India, Canada and the U.S. V-Soft is known as an
agile, innovative technology services company holding several awards and distinctions
and has a wide variety of partnerships across diverse technology stacks.
As a valued V-Soft Consultant, you’re eligible for full benefits (Medical, Dental,
Vision), a 401(k) plan, competitive compensation and more. V-Soft is partnered
with numerous Fortune 500 companies, exceptionally positioned to advance your
career growth.
V-Soft Consulting provides equal employment opportunities to all employees and
applicants for employment and prohibits discrimination and harassment of any type
without regard to race, color, religion, age, sex, national origin, disability
status, genetics, protected veteran status, sexual orientation, gender identity
or expression, or any other characteristic protected by federal, state or local
laws.
For more information or to view all our open jobs, please visit www.vsoftconsulting.com
or call (844) 425-8425.'
- "experiences that leverage the latest technologies in open source and the Cloud.\
\ Digital Information Management (DIM) is a team of engineers committed to championing\
\ a data-driven decision-making culture and meets the business demand for timely\
\ insight-focused analytics and information delivery.\n\nYou will be working with\
\ all levels of technology from backend data processing technologies (Databricks/Apache\
\ Spark) to other Cloud computing technologies / Azure Data Platform. You should\
\ be a strong analytical thinker, detail-oriented and love working with data with\
\ a strong background in data engineering and application development. Must be\
\ a hand-on technologist passionate about learning new technologies and help improve\
\ the ways we can better leverage Advanced Analytics and Machine Learning.\n\n\
Responsibilities\n\nBuild end-to-end direct capabilities.Create and maintain optimal\
\ data pipeline architecture.Build the infrastructure required for optimal extraction,\
\ transformation, and loading of data from a wide variety of data sources.Use\
\ analytics for capitalizing on the data for making decisions and achieving better\
\ outcomes for the business.Derive insights to differentiate member and team member\
\ experiences. Collaborate with cross-functional teams.Analyze and define with\
\ product teams the data migration and data integration strategies.Apply experience\
\ in analytics, data visualization and modeling to find solutions for a variety\
\ of business and technical problems.Querying and analyzing small and large data\
\ sets to discover patterns and deliver meaningful insights. Integrate source\
\ systems with information management solutions and target systems for automated\
\ migration processes.Create proof-of-concepts to demonstrate viability of solutions\
\ under consideration.\n\n\nQualifications\n\nBachelor’s degree in computer science,\
\ information systems, or other technology-related field or equivalent number\
\ of years of experience.Advanced hands-on experience implementing and supporting\
\ large scale data processing pipelines and migrations using technologies (eg.\
\ Azure Services, Python programming).Significant hands-on experience with Azure\
\ services such as Azure Data Factory (ADF), Azure Databricks, Azure Data Lake\
\ Storage (ADLS Gen2), Azure SQL, and other data sources. Significant hands-on\
\ experience designing and implementing reusable frameworks using Apache Spark\
\ (PySpark preferred or Java/Scala).Solid foundation in data structures, algorithms,\
\ design patterns and strong analytical and problem-solving skills.Strong hands-on\
\ experience leading design thinking as well as the ability to translate ideas\
\ to clearly articulate technical solutions. Experience with any of the following\
\ Analytics and Information Management competencies: Data Management and Architecture,\
\ Performance Management, Information Delivery and Advanced Analytics.\n\n\nDesired\
\ Qualifications\n\nProficiency in collaborative coding practices, such as pair\
\ programming, and ability to thrive in a team-oriented environment.The following\
\ certifications:Microsoft Certified Azure Data EngineerMicrosoft Certified Azure\
\ Solutions ArchitectDatabricks Certified Associate Developer for Apache 2.4/3.0\n\
Hours: Monday - Friday, 8:00AM - 4:30PM\n\nLocation: 820 Follin Lane, Vienna,\
\ VA 22180 | 5510 Heritage Oaks Drive Pensacola, FL 32526 | 141 Security Drive\
\ Winchester, VA 22602\n\nAbout Us\n\nYou have goals, dreams, hobbies, and things\
\ you're passionate about—what's important to you is important to us. We're looking\
\ for people who not only want to do meaningful, challenging work, keep their\
\ skills sharp and move ahead, but who also take time for the things that matter\
\ to them—friends, family, and passions. And we're looking for team members who\
\ are passionate about our mission—making a difference in military members' and\
\ their families' lives. Together, we can make it happen. Don't take our word\
\ for it:\n\n Military Times 2022 Best for Vets Employers WayUp Top 100 Internship\
\ Programs Forbes® 2022 The Best Employers for New Grads Fortune Best Workplaces\
\ for Women Fortune 100 Best Companies to Work For® Computerworld® Best Places\
\ to Work in IT Ripplematch Campus Forward Award - Excellence in Early Career\
\ Hiring Fortune Best Place to Work for Financial and Insurance Services\n\n\n\
\n\nDisclaimers: Navy Federal reserves the right to fill this role at a higher/lower\
\ grade level based on business need. An assessment may be required to compete\
\ for this position. Job postings are subject to close early or extend out longer\
\ than the anticipated closing date at the hiring team’s discretion based on qualified\
\ applicant volume. Navy Federal Credit Union assesses market data to establish\
\ salary ranges that enable us to remain competitive. You are paid within the\
\ salary range, based on your experience, location and market position\n\nBank\
\ Secrecy Act: Remains cognizant of and adheres to Navy Federal policies and procedures,\
\ and regulations pertaining to the Bank Secrecy Act."
- "Data AnalystDakota Dunes, SD\nEntry Level SQL, Run SQL The queries. Client is\
\ using ThoughtspotUnderstanding of Dashbord and Proficient in Microsoft Office\
\ and excel \nPlease share your profile to [email protected] or reach\
\ me on 619 771 1188."
- source_sentence: Customer data management, regulatory compliance, advanced Excel
and Access proficiency
sentences:
- 'skills, attention to detail, and experience working with data in Excel. The candidate
must enjoy collaborative work, actively participate in the development of team
presentations, and engage in review of other analyst findings. ResponsibilitiesThe
Junior Analyst will be responsible for examining data from different sources with
the goal of providing insights into NHLBI, its mission, business processes, and
information systems. Responsibilities for this position include:Develop a strong
understanding of the organization, functions, and data sources to be able to ensure
analytical sources and methodologies are appropriately applied for the data need.Develop
clear and well-structured analytical plans.Ensure data sources, assumptions, methodologies,
and visualization approaches are consistent with prior work by the OPAE.Assess
the validity of source data and subsequent findings.Produce high quality, reliable
data analysis on a variety of functional areas.Explain the outcome/results by
identifying trends and creating visualizations.Use best practices in data analysis
and visualization.Exhibit results, conclusions, and recommendations to leadership,
and customize presentations to align with various audiences.Document and communicate
analysis results (briefings, reports, and/or backup analysis files) in a manner
that clearly articulates the approach, results, and data-driven recommendations.Continually
assess all current activities and proactively communicate potential issues and/or
challenges.May support data scientists on various projects. Qualifications Minimum
qualifications:Bachelor’s degree in data science or related fields.Minimum of
2 years of demonstrable experience in data analysis.Must have 2 years of experience
in using Excel for data analysis and visualization andWillingness to learn basic
data science tools and methodologies.Intermediate to advanced proficiency with
industry-standard word processing, spreadsheet, and presentation software programs.Excellent
verbal and written communication skills.Strong attention to detail.Collaborative
team player.Proven problem solving and critical thinking skills.Must be able to
obtain Public Trust Clearance.US work authorization (we participate in E-Verify).
Preferred qualifications:Proficient in the use of basic data science tools and
methodologies (python, SQL, machine learning).MS in data science or related fields.
Salary and benefitsWe offer a competitive salary and a generous benefits package,
including full health and dental, HSA and retirement accounts, short- and long-term
disability insurance, life insurance, paid time off and 11 federal holidays. Location:
Washington DC, Hybrid'
- SKILLS – Very Strong, Microsoft Excel (Pivot Tables, Sumifs, Vlookups etc), Data
manipulation, Logistics and operations terminology Job SummaryApple AMR Ops Logistics
is looking for an experienced Data Analyst to support its Business Analytics team.
This position will be responsible for ensuring maintenance and frequent updates
to Apple’s internal Shipping Exceptions Management System. The position will work
closely with AMR Logistics stakeholders to ensure timely execution of daily jobs
by transforming data in Excel into Apple’s internal tools. Key Responsibilities•
Review multiple Excel reports and ensure timely uploads into the Shipping Exceptions
Management System• Develop robust data visualizations that will help to answer
commonly asked questions quickly and thoroughly about Shipping Exceptions• Identify
data anomalies, work to root cause and remediate issues in data collection, storage,
transformation, or reporting Key Qualifications1 – 2 years of work experience
preferredSkilled in Excel and data manipulation (mandatory)Familiarity with Logistics
and Operations terminologyFamiliarity with Business Objects a plusAbility to create
cross-platform reportsAbility to turn data into information and insightsHigh-level
attention to detail, including the ability to spot data errors and potential issues
in Apple’s internal systems Hard Skills:Microsoft Excel (Pivot Tables, Sumifs,
Vlookups etc)Good Verbal and Communication skills
- 'Qualifications: 0-2 years relevant experience. Advanced knowledge of MS Office Suite,
including proficiency in Excel and Access. Consistently demonstrates clear and
concise written and verbal communication skills. Demonstrated organization skills
with an excellent attention to detail. Ability to focus on high quality work.
Education: Bachelor’s/University degree or equivalent experience. Please share with
me your updated resume if you are interested in applying for this role.
Dexian is a leading provider of staffing, IT, and workforce solutions with over
12,000 employees and 70 locations worldwide. As one of the largest IT staffing
companies and the 2nd largest minority-owned staffing company in the U.S., Dexian
was formed in 2023 through the merger of DISYS and Signature Consultants. Combining
the best elements of its core companies, Dexian''s platform connects talent, technology,
and organizations to produce game-changing results that help everyone achieve
their ambitions and goals. Dexian''s brands include Dexian DISYS, Dexian Signature
Consultants, Dexian Government Solutions, Dexian Talent Development and Dexian
IT Solutions. Visit https://dexian.com/ to learn more. Dexian is'
- source_sentence: Clarity PPM reporting, data dashboard customization, performance
quality assurance
sentences:
- "skills and the ability to connect and communicate across multiple departments.Adept\
\ at report writing and presenting findings.Ability to work under pressure and\
\ meet tight deadlines.Be able to read and update project and program level resource\
\ forecasts.Identify recurring process issues and work with managers to find solutions\
\ and initiate improvements to mitigate future recurrence. \nSkills and Qualifications:5+\
\ years in a Data Analyst and/or Data Scientist capacity.5 years of experience\
\ with Clarity PPM reporting, developing data dashboards, charts and datasets\
\ in Clarity.Strong knowledge of and experience with reporting packages (Business\
\ Objects, Tableau, Power BI, etc.), databases (SQL), programming (XML, JavaScript,\
\ etc.).Knowledge of statistics and experience using statistical packages for\
\ analyzing datasets (Excel, SAS, R, SPSS, etc.)High understanding of PPM disciplines\
\ has worked in a team and covered strategic projects. Experience with Dashboard\
\ customization, configuration, user interface personalization and infrastructure\
\ management will be helpful.Strong analytical skills with the ability to collect,\
\ organize, analyze, and disseminate significant amounts of information with attention\
\ to detail, accuracy, and actionable insights.Excellent communicator, adjusting\
\ communication styles based on your audience.Quick learner, adaptable and can\
\ thrive in new environments.Proactive, confident, and engaging; especially when\
\ it comes to large stakeholder groups.Capable of critically evaluating data to\
\ derive meaningful, actionable insights.Demonstrate superior communication and\
\ presentation capabilities, adept at simplifying complex data insights for audiences\
\ without a technical background."
- "skills and current Lubrizol needs):\n\nCreate predictive models by mining complex\
\ data for critical formulating or testing insights Implement and assess algorithms\
\ in R, Python, SAS, JMP or C#/C++ Research and implement new statistical, machine\
\ learning and/or optimization approaches (PhD level)Collaborate with data science\
\ team, as well as, scientists and engineers, to understand their needs, and find\
\ creative solutions to meet those needs \n\nPrevious Intern Projects Include\n\
\nPredictive modeling using Bayesian and machine learning methods R/Shiny tool\
\ development to enable model predictions and formulation optimization Creation\
\ of an interactive visualization tool for monitoring predictive models Multitask\
\ learning (transfer learning) using co-regionalized Gaussian Processes (PhD level)Multi-objective\
\ optimization using genetic algorithms (PhD level)Survival modeling using bagged\
\ Cox proportional hazards regression trees (PhD level)Bootstrap variance estimation\
\ for complex nonlinear models (PhD level)\n\nWhat tools do you need for success?\n\
\nEnrolled in a Masters or PhD program such as statistics, data analytics, machine\
\ learningExcellent programming skills with the ability to learn new methods quicklyExposure\
\ to database systems and the ability to efficiently manipulate complex data Interest\
\ and experience in advanced statistical modeling/machine learning methods (PhD\
\ level)Coursework in statistical modeling and data mining methodsCuriosity and\
\ creativity\n\nBenefits Of Lubrizol’s Chemistry Internship Programs\n\nRewarding\
\ your hard work!Competitive payHoliday pay for holidays that fall within your\
\ work periodFUN! We host a variety of events and activities for our students.\
\ Past events include a Cleveland Cavaliers game, paid volunteering days, professional\
\ development and networking events, and even a picnic hosted by our CEO!\nWhile\
\ headquartered in the United States, Lubrizol is truly a global specialty chemical\
\ company. We have a major presence in five global regions and do business in\
\ more than 100 countries. Our corporate culture ensures that Lubrizol is one\
\ company throughout the world, but you will find each region is a unique place\
\ to work, live and play.\n\nLubrizol is"
- 'experience with agile engineering and problem-solving creativity. United by our
core values and our purpose of helping people thrive in the brave pursuit of next,
our 20,000+ people in 53 offices around the world combine experience across technology,
data sciences, consulting and customer obsession to accelerate our clients’ businesses
through designing the products and services their customers truly value.
Job Description
This position requires in-depth knowledge and expertise in GCP services, architecture,
and best practices. Will work closely with clients to understand their business
objectives and develop strategies to leverage GCP to meet their needs. They will
collaborate with cross-functional teams to design, implement, and manage scalable
and reliable cloud solutions. They will also be responsible for driving innovation
and staying up-to-date with the latest GCP technologies and trends to provide
industry-leading solutions.
Your Impact:
Collaborate with clients to understand their business requirements and design
GCP architecture to meet their needs. Develop and implement cloud strategies, best
practices, and standards to ensure efficient and effective cloud utilization. Work
with cross-functional teams to design, implement, and manage scalable and reliable
cloud solutions on GCP. Provide technical guidance and mentorship to the team to
develop their skills and expertise in GCP. Stay up-to-date with the latest GCP
technologies, trends, and best practices and assess their applicability to client
solutions. Drive innovation and continuous improvement in GCP offerings and services
to provide industry-leading solutions. Collaborate with sales and business development
teams to identify and pursue new business opportunities related to GCP. Ensure
compliance with security, compliance, and governance requirements in GCP solutions. Develop
and maintain strong relationships with clients, vendors, and internal stakeholders
to promote the adoption and success of GCP solutions.
Qualifications
Must have good implementation experience on various GCP Data Storage and Processing
services such as BigQuery, Dataflow, Bigtable, Dataform, Data Fusion, Cloud Spanner,
Cloud SQL. Must have programmatic experience with tools like JavaScript, Python,
Apache Spark. Experience in building advanced BigQuery SQL and BigQuery modelling
is required. Experience in orchestrating end-to-end data pipelines with tools like
Cloud Composer, Dataform is highly desired. Experience in managing complex and
reusable Dataflow pipelines is highly desired.
What sets you apart:
Experience in complex migrations from legacy data warehousing solutions or on-prem
data lakes to GCP. Experience in maneuvering resources in delivering tight projects. Experience
in building real-time ingestion and processing frameworks on GCP. Adaptability
to learn new technologies and products as the job demands. Experience in implementing
data-governance solutions. Knowledge in AI, ML and GEN-AI use cases. Multi-cloud &
hybrid cloud experience. Any cloud certification.
Additional Information
Flexible vacation policy; time is not limited, allocated, or accrued. 16 paid holidays
throughout the year. Generous parental leave and new parent transition program. Tuition
reimbursement. Corporate gift matching program.
Career Level: Senior Associate
Base Salary Range for the Role: 115,000-150,000 (varies depending on experience)
The range shown represents a grouping of relevant ranges currently in use at Publicis
Sapient. Actual range for this position may differ, depending on location and
specific skillset required for the work itself.'
- source_sentence: Go-to-Market strategy, Salesforce dashboard development, SQL data
analysis
sentences:
- "experience: from patients finding clinics and making appointments, to checking\
\ in, to clinical documentation, and to the final bill paid by the patient. Our\
\ team is committed to changing healthcare for the better by innovating and revolutionizing\
\ on-demand healthcare for millions of patients across the country.\n\nExperity\
\ offers the following:\n\nBenefits – Comprehensive coverage starts first day\
\ of employment and includes Medical, Dental/Orthodontia, and Vision.Ownership\
\ - All Team Members are eligible for synthetic ownership in Experity upon one\
\ year of employment with real financial rewards when the company is successful!Employee\
\ Assistance Program - This robust program includes counseling, legal resolution,\
\ financial education, pet adoption assistance, identity theft and fraud resolution,\
\ and so much more.Flexibility – Experity is committed to helping team members\
\ face the demands of juggling work, family and life-related issues by offering\
\ flexible work scheduling to manage your work-life balance.Paid Time Off (PTO)\
\ - Experity offers a generous PTO plan and increases with milestones to ensure\
\ our Team Members have time to recharge, relax, and spend time with loved ones.Career\
\ Development – Experity maintains a learning program foundation for the company\
\ that allows Team Members to explore their potential and achieve their career\
\ goals.Team Building – We bring our Team Members together when we can to strengthen\
\ the team, build relationships, and have fun! We even have a family company picnic\
\ and a holiday party.Total Compensation - Competitive pay, quarterly bonuses\
\ and a 401(k) retirement plan with an employer match to help you save for your\
\ future and ensure that you can retire with financial security.\n\nHybrid workforce:\n\
\nExperity offers Team Members the opportunity to work remotely or in an office.\
\ While this position allows remote work, we require Team Members to live within\
\ a commutable distance from one of our locations to ensure you are available\
\ to come into the office as needed.\n\nJob Summary: \n\nWe are seeking a highly\
\ skilled and data-driven Go-to-Market (GTM) Data Analyst to join our team. The\
\ ideal candidate will be adept at aggregating and analyzing data from diverse\
\ sources, extracting valuable insights to inform strategic decisions, and proficient\
\ in building dynamic dashboards in Salesforce and other BI tools. Your expertise\
\ in SQL and data analytics will support our go-to-market strategy, optimize our\
\ sales funnel, and contribute to our overall success.\n\nExperience: \n\nBachelor’s\
\ or Master’s degree in Data Science, Computer Science, Information Technology,\
\ or a related field.Proven experience as a Data Analyst or similar role, with\
\ a strong focus on go-to-market strategies.Expertise in SQL and experience with\
\ database management.Proficiency in Salesforce and other BI tools (e.g., Tableau,\
\ Power BI).Strong analytical skills with the ability to collect, organize, analyze,\
\ and disseminate significant amounts of information with attention to detail\
\ and accuracy.Excellent communication and presentation skills, capable of conveying\
\ complex data insights in a clear and persuasive manner.Adept at working in fast-paced\
\ environments and managing multiple projects simultaneously.Familiarity with\
\ sales and marketing metrics, and how they impact business decisions.\n\nBudgeted\
\ salary range:\n\n$66,900 to $91,000\n\nTeam Member Competencies:\n\nUnderstands\
\ role on the team and works to achieve goals to the best of your ability.Working\
\ within a team means there will be varying opinions and ideas. Active listening\
\ and thoughtfully responding to what your team member says.Take responsibility\
\ for your mistakes and look for solutions. Understand how your actions impact\
\ team.Provides assistance, information, or other support to others to build or\
\ maintain relationships.Maintaining a positive attitude. Tackle challenges as\
\ they come, and don’t let setbacks get you down.Gives honest and constructive\
\ feedback to other team members.When recognizing a problem, take action to solve\
\ it.Demonstrates and supports the organization's core values.\n\nEvery team member\
\ exhibits our core values:\n\nTeam FirstLift Others UpShare OpenlySet and Crush\
\ GoalsDelight the Client\n\nOur urgent care solutions include:\n\nElectronic\
\ Medical Records (EMR): Software that healthcare providers use to input patient\
\ data, such as medical history, diagnoses, treatment plans, medications, and\
\ test results.Patient Engagement (PE): Software that shows patients the wait\
\ times at various clinics, allows patients to reserve a spot in line if there's\
\ a wait, and book the appointment.Practice Management (PM): Software that the\
\ clinic front desk staff uses to register the patient once they arrive for their\
\ appointment.Billing and Revenue Cycle Management (RCM): Software that manages\
\ coding, billing and payer contracts for clinics so they don’t have to.Teleradiology:\
\ Board certified radiologist providing accurate and timely reads of results from\
\ X-rays, CT scans, MRIs, and ultrasounds, for our urgent care clients.Consulting:\
\ Consulting services for urgent care clinics to assist with opening, expanding\
\ and enhancing client's businesses"
- 'experience with Cloud Engineering / Services. 3+ years of work experience as a
backend software engineer in Python with exceptional software engineering knowledge.
Experience with ML workflow orchestration tools: Airflow, Kubeflow etc. Advanced
working knowledge of object-oriented/object function programming languages: Python,
C/C++, Julia. Experience in DevOps: Jenkins/Tekton etc. Experience with cloud services,
preferably GCP services like Vertex AI, Cloud Function, BigQuery etc. Experience
in container management solutions: Kubernetes, Docker. Experience in scripting languages:
Bash, PowerShell etc. Experience with Infrastructure as Code: Terraform etc.
Skills Preferred: Master’s degree focused on Computer Science / Machine Learning or a related
field. Experience working with Google Cloud Platform (GCP) - specifically Google
Kubernetes Engine, Terraform, and infrastructure. Experience in delivering cloud
engineering products. Experience in programming concepts such as Paired Programming,
Test-Driven Development, etc. Understanding of MLOps/the machine learning life cycle
and common machine learning frameworks (sklearn, TensorFlow, PyTorch etc.) is a
big plus. Must be a quick learner and open to learning new technology. Experience
applying agile practices to solution delivery. Experience in all phases of the
development lifecycle. Must be team-oriented and have excellent oral and written
communication skills. Good organizational and time-management skills. Must be
a self-starter to understand existing bottlenecks and come up with innovative
solutions. Knowledge of coding and software craftsmanship practices. Experience
and good understanding of GCP processing / DevOps / Machine Learning'
- "Skills\n\n Good banking domain background with Advanced SQL knowledge is\
\ a MUST \n\n Expert in Advanced Excel functions used for data analysis Ability\
\ to Understand Physical and Logical Data Models and understanding of Data Quality\
\ Concepts. Write SQL Queries to pull/fetch data from systems/DWH Understanding\
\ of Data WareHousing concepts Understanding the Data Movement between Source\
\ and Target applications and perform data quality checks to maintain the data\
\ integrity, accuracy and consistency Experience in analysis/reconciliation of\
\ data as per the business requirements Conduct research and Analysis in order\
\ to come up with solution to business problems Understanding requirements directly\
\ from clients/ client stakeholders and writing code to extract relevant data\
\ and produce report\n\nExperience Required\n\n10-12 Years\n\nRoles & Responsibilities\n\
\nInterpret data, analyze results using Data Analysis techniques and provide ongoing\
\ reports\n\n Develop and implement databases, data repositories for performing\
\ analysis Acquire data from primary or secondary data sources and maintain databases/data\
\ repositories Identify, analyze, and interpret trends or patterns in complex\
\ data sets Filter and “clean” data by reviewing computer reports, printouts,\
\ and performance indicators to locate and correct code problems ; Work with management\
\ to prioritize business and information needs Locate and define new process improvement\
\ opportunities Good exposure and hands on exp with Excel features used for data\
\ analysis & reporting"
- source_sentence: Senior Data Scientist, Statistical Analysis, Data Interpretation,
TS/SCI Clearance
sentences:
- Skills: 8+ years of relevant experience. Experience with big data technology(s)
or ecosystem in Hadoop, HDFS (also an understanding of HDFS Architecture), Hive,
Map Reduce, Base - this is considering all of AMP datasets are in HDFS/S3. Advanced
SQL and SQL performance tuning. Strong experience in Spark and Scala
- 'experience, regulatory compliance & operational efficiencies, enabled by Google
Cloud.
This position will lead integration of core data from new North America Lending
platforms into Data Factory (GCP BQ), and build upon the existing analytical data,
including merging historical data from legacy platforms with data ingested from
new platforms, to enable critical regulatory reporting, operational analytics,
risk analytics and modeling.
Will provide overall technical guidance to implementation teams and oversee adherence
to engineering patterns and data quality and compliance standards across all
Data Factory workstreams. Support business adoption of data from the new platform
and sunset of legacy platforms & technology stack.
This position will collaborate with the technical program manager, data platform enablement
manager, analytical data domain leaders, subject matter experts, supplier partners,
business partners and IT operations teams to deliver the data integration workstream
plan following an agile framework.
Responsibilities
We are looking for a dynamic, technical leader with prior experience leading a
data warehouse as part of a complex business & tech transformation, with strong experience
in Data Engineering, GCP BigQuery, data ETL pipelines, data architecture, data
governance, data protection, security & compliance, and user access enablement.
Key responsibilities:
This role will focus on implementing data integration of the new lending platform
into the Google Cloud Data Platform (Data Factory) and existing analytical domains, and
building new data marts, while ensuring new data is integrated seamlessly with
historical data. Will lead a dedicated team of data engineers & analysts to understand
and assess the new data model and attributes in upstream systems, and build an approach
to integrate this data into the factory. Will lead the data integration architecture
(in collaboration with core mod platform & Data Factory architects), designs,
and solution approach for Data Factory. Will understand the scope of reporting for
the MMP (Minimal Marketable Product) launch & build the data marts required to enable
agreed use cases for regulatory, analytical & operational reporting, and data
required for risk modeling. Will collaborate with Data Factory analytical domain
teams to build new pipelines & expand analytical domains. Will lead data
integration testing strategy & its execution within Data Factory (end-to-end,
from ingestion, to analytical domains, to marts) to support use cases. Will be
the Data Factory SPOC for the Core Modernization program and help facilitate & prioritize
backlogs of data workstreams. Ensure the data solutions are aligned to overall
program goals and timing and are delivered with quality. Collaborate with program managers
to plan iterations, backlogs and dependencies across all workstreams to progress
at the required pace. Drive adoption of standardized architecture, design
and quality assurance approaches across all workstreams and ensure solutions adhere
to established standards. People leader for a team of 5+ data engineers and analysts;
additionally manage the supplier partner team who will execute the migration plan.
Lead communication of status, issues & risks to key stakeholders.
Qualifications
You''ll have…
Bachelor’s degree in computer science or equivalent. 5+ years of experience delivering
complex data warehousing projects and leading teams of 10+ engineers and suppliers
to build Big Data/data warehouse solutions. 10+ years of experience in technical
delivery of data warehouse cloud solutions for large companies, and business adoption
of these platforms to build analytics, insights & models. Prior experience with
cloud data architecture, data modelling principles, DevOps, security and controls.
Google Cloud certified - Cloud Data Engineer preferred. Hands-on experience with
the following: orchestration of data pipelines (e.g. Airflow, DBT, Dataform, Astronomer);
batch data pipelines (e.g. BQ SQL, Dataflow, DTS); streaming data pipelines (e.g. Kafka,
Pub/Sub, gsutil); data warehousing techniques (e.g. data modelling, ETL/ELT).
Even better, you may have….
Master’s degree in Computer Science, Computer Engineering, Data Science or a related
field. Knowledge of Ford Credit business functions, core systems and data. Experience
in technical program management & delivering complex migration projects. Experience
building high-performance teams. Experience managing or working with globally distributed
teams. Prior experience in leveraging offshore development service providers. Experience
in a Fintech or large manufacturing company. Very strong leadership, communication,
organizing and problem-solving skills. Ability to negotiate with and influence
stakeholders & drive forward strategic data transformation. Quick learner, self-starter,
energetic leader with drive to deliver results. Empathy and care for customers
and teams; as a leader, guide teams on advancement of skills, objective setting
and performance assessments.
You may not check every box, or your experience may look a little different from
what we''ve outlined, but if you think you can bring value to Ford Motor Company,
we encourage you to apply!
As an established global company, we offer the benefit of choice. You can choose
what your Ford future will look like: will your story span the globe, or keep
you close to home? Will your career be a deep dive into what you love, or a series
of new teams and new skills? Will you be a leader, a changemaker, a technical
expert, a culture builder...or all of the above? No matter what you choose, we
offer a work life that works for you, including:
Immediate medical, dental, and prescription drug coverage; flexible family care,
parental leave, new parent ramp-up programs, subsidized back-up childcare and
more; vehicle discount program for employees and family members, and management
leases; tuition assistance; established and active employee resource groups; paid time
off for individual and team community service; a generous schedule of paid holidays,
including the week between Christmas and New Year''s Day; paid time off and the
option to purchase additional vacation time.
For a detailed look at our benefits, click here:
2024 New Hire Benefits Summary
Visa sponsorship is not available for this position.
Candidates for positions with Ford Motor Company must be legally authorized to
work in the United States. Verification of employment eligibility will be required
at the time of hire.
We are'
- "experience to solve some of the most challenging intelligence issues around data.\n\
\nJob Responsibilities & Duties\n\nDevise strategies for extracting meaning and\
\ value from large datasets. Make and communicate principled conclusions from\
\ data using elements of mathematics, statistics, computer science, and application\
\ specific knowledge. Through analytic modeling, statistical analysis, programming,\
\ and/or another appropriate scientific method, develop and implement qualitative\
\ and quantitative methods for characterizing, exploring, and assessing large\
\ datasets in various states of organization, cleanliness, and structure that\
\ account for the unique features and limitations inherent in data holdings. Translate\
\ practical needs and analytic questions related to large datasets into technical\
\ requirements and, conversely, assist others with drawing appropriate conclusions\
\ from the analysis of such data. Effectively communicate complex technical information\
\ to non-technical audiences.\n\nMinimum Qualifications\n\n10 years relevant experience\
\ with Bachelors in related field; or 8 years experience with Masters in related\
\ field; or 6 years experience with a Doctoral degree in a related field; or 12\
\ years of relevant experience and an Associates may be considered for individuals\
\ with in-depth experienceDegree in an Mathematics, Applied Mathematics, Statistics,\
\ Applied Statistics, Machine Learning, Data Science, Operations Research, or\
\ Computer Science, or related field of technical rigorAbility/willingness to\
\ work full-time onsite in secure government workspacesNote: A broader range of\
\ degrees will be considered if accompanied by a Certificate in Data Science from\
\ an accredited college/university.\n\nClearance Requirements\n\nThis position\
\ requires a TS/SCI with Poly\n\nLooking for other great opportunities? Check\
\ out Two Six Technologies Opportunities for all our Company’s current openings!\n\
\nReady to make the first move towards growing your career? If so, check out the\
\ Two Six Technologies Candidate Journey! This will give you step-by-step directions\
\ on applying, what to expect during the application process, information about\
\ our rich benefits and perks along with our most frequently asked questions.\
\ If you are undecided and would like to learn more about us and how we are contributing\
\ to essential missions, check out our Two Six Technologies News page! We share\
\ information about the tech world around us and how we are making an impact!\
\ Still have questions, no worries! You can reach us at Contact Two Six Technologies.\
\ We are happy to connect and cover the information needed to assist you in reaching\
\ your next career milestone.\n\nTwo Six Technologies is \n\nIf you are an individual\
\ with a disability and would like to request reasonable workplace accommodation\
\ for any part of our employment process, please send an email to [email protected].\
\ Information provided will be kept confidential and used only to the extent required\
\ to provide needed reasonable accommodations.\n\nAdditionally, please be advised\
\ that this business uses E-Verify in its hiring practices.\n\n\n\nBy submitting\
\ the following application, I hereby certify that to the best of my knowledge,\
\ the information provided is true and accurate."
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on sentence-transformers/all-distilroberta-v1
results:
- task:
type: triplet
name: Triplet
dataset:
name: ai job validation
type: ai-job-validation
metrics:
- type: cosine_accuracy
value: 0.9900990128517151
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: ai job test
type: ai-job-test
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
---
# SentenceTransformer based on sentence-transformers/all-distilroberta-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) <!-- at revision 8d88b92a34345fd6a139aa47768c9881720006ce -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("krshahvivek/distilroberta-ai-job-embeddings")
# Run inference
sentences = [
'Senior Data Scientist, Statistical Analysis, Data Interpretation, TS/SCI Clearance',
'experience to solve some of the most challenging intelligence issues around data.\n\nJob Responsibilities & Duties\n\nDevise strategies for extracting meaning and value from large datasets. Make and communicate principled conclusions from data using elements of mathematics, statistics, computer science, and application specific knowledge. Through analytic modeling, statistical analysis, programming, and/or another appropriate scientific method, develop and implement qualitative and quantitative methods for characterizing, exploring, and assessing large datasets in various states of organization, cleanliness, and structure that account for the unique features and limitations inherent in data holdings. Translate practical needs and analytic questions related to large datasets into technical requirements and, conversely, assist others with drawing appropriate conclusions from the analysis of such data. Effectively communicate complex technical information to non-technical audiences.\n\nMinimum Qualifications\n\n10 years relevant experience with Bachelors in related field; or 8 years experience with Masters in related field; or 6 years experience with a Doctoral degree in a related field; or 12 years of relevant experience and an Associates may be considered for individuals with in-depth experienceDegree in an Mathematics, Applied Mathematics, Statistics, Applied Statistics, Machine Learning, Data Science, Operations Research, or Computer Science, or related field of technical rigorAbility/willingness to work full-time onsite in secure government workspacesNote: A broader range of degrees will be considered if accompanied by a Certificate in Data Science from an accredited college/university.\n\nClearance Requirements\n\nThis position requires a TS/SCI with Poly\n\nLooking for other great opportunities? Check out Two Six Technologies Opportunities for all our Company’s current openings!\n\nReady to make the first move towards growing your career? If so, check out the Two Six Technologies Candidate Journey! This will give you step-by-step directions on applying, what to expect during the application process, information about our rich benefits and perks along with our most frequently asked questions. If you are undecided and would like to learn more about us and how we are contributing to essential missions, check out our Two Six Technologies News page! We share information about the tech world around us and how we are making an impact! Still have questions, no worries! You can reach us at Contact Two Six Technologies. We are happy to connect and cover the information needed to assist you in reaching your next career milestone.\n\nTwo Six Technologies is \n\nIf you are an individual with a disability and would like to request reasonable workplace accommodation for any part of our employment process, please send an email to [email protected]. Information provided will be kept confidential and used only to the extent required to provide needed reasonable accommodations.\n\nAdditionally, please be advised that this business uses E-Verify in its hiring practices.\n\n\n\nBy submitting the following application, I hereby certify that to the best of my knowledge, the information provided is true and accurate.',
'Skills :8+ years of relevant experienceExperience with big data technology(s) or ecosystem in Hadoop, HDFS (also an understanding of HDFS Architecture), Hive, Map Reduce, Base - this is considering all of AMP datasets are in HDFS/S3Advanced SQL and SQL performance tuningStrong experience in Spark and Scala',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
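Since the model ends with a `Normalize()` module, the embeddings returned by `encode` are unit-length, so a plain dot product reproduces the cosine similarity scores. A quick check, continuing the example above:
```python
import numpy as np
# For L2-normalized embeddings, dot product == cosine similarity
dot_scores = embeddings @ embeddings.T
assert np.allclose(dot_scores, model.similarity(embeddings, embeddings).numpy(), atol=1e-5)
```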
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `ai-job-validation` and `ai-job-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | ai-job-validation | ai-job-test |
|:--------------------|:------------------|:------------|
| **cosine_accuracy** | **0.9901** | **1.0** |
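The same check can be reproduced locally. A minimal sketch, using a hypothetical triplet drawn from the widget examples above (the evaluator expects parallel lists of anchors, positives, and negatives):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator
model = SentenceTransformer("krshahvivek/distilroberta-ai-job-embeddings")
# Hypothetical triplet: the anchor (query) should embed closer to the
# positive (matching posting) than to the negative (unrelated posting).
evaluator = TripletEvaluator(
anchors=["GCP Data Engineer, BigQuery, Airflow DAG, Hadoop ecosystem"],
positives=["Hands on with GCP Cloud Services such as Big Query, Airflow DAG, Dataflow ..."],
negatives=["Meticulous data entry skills and review of complex legal documents ..."],
name="ai-job-validation",
)
print(evaluator(model)) # e.g. {'ai-job-validation_cosine_accuracy': ...}
```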
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 809 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 809 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 15.02 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 348.14 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>GCP Data Engineer, BigQuery, Airflow DAG, Hadoop ecosystem</code> | <code>requirements for our direct client, please go through the below Job Description. If you are interested please send me your updated word format resume to [email protected] and reach me @ 520-231-4672.<br> Title: GCP Data EngineerLocation: Hartford, CTDuration: Full Time<br>6-8 Years of experience in data extraction and creating data pipeline workflows on Bigdata (Hive, HQL/PySpark) with knowledge of Data Engineering concepts.Experience in analyzing large data sets from multiple data sources, perform validation of data.Knowledge of Hadoop eco-system components like HDFS, Spark, Hive, Sqoop.Experience writing codes in Python.Knowledge of SQL/HQL to write optimized queries.Hands on with GCP Cloud Services such as Big Query, Airflow DAG, Dataflow, Beam etc.</code> |
| <code>Data analysis for legal documents, meticulous data entry, active Top-Secret security clearance</code> | <code>Requirements NOTE: Applicants with an Active TS Clearance preferred Requirements * High School diploma or GED, Undergraduate degree preferred Ability to grasp and understand the organization and functions of the customer Meticulous data entry skills Excellent communication skills; oral and written Competence to review, interpret, and evaluate complex legal and non-legal documents Attention to detail and the ability to read and follow directions is extremely important Strong organizational and prioritization skills Experience with the Microsoft Office suite of applications (Excel, PowerPoint, Word) and other common software applications, to include databases, intermediate skills preferred Proven commitment and competence to provide excellent customer service; positive and flexible Ability to work in a team environment and maintain a professional dispositionThis position requires U.S. Citizenship and a 7 (or 10) year minimum background investigation ** NOTE: The 20% pay differential is d...</code> |
| <code>Trust & Safety, Generative AI, Recommender Systems</code> | <code>experiences achieve more in their careers. Our vision is to create economic opportunity for every member of the global workforce. Every day our members use our products to make connections, discover opportunities, build skills and gain insights. We believe amazing things happen when we work together in an environment where everyone feels a true sense of belonging, and that what matters most in a candidate is having the skills needed to succeed. It inspires us to invest in our talent and support career growth. Join us to challenge yourself with work that matters.<br><br>Location: <br><br>At LinkedIn, we trust each other to do our best work where it works best for us and our teams. This role offers a hybrid work option, meaning you can work from home and commute to a LinkedIn office, depending on what’s best for you and when it is important for your team to be together. <br><br>This role is based in Sunnyvale, CA. <br><br><br>Team Information:<br><br><br>The mission of the Anti-Abuse AI team is to build trust in every inte...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
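For reference, a minimal sketch of constructing this loss with the parameters listed above (in-batch negatives ranking, with cosine similarity scaled by 20 before the cross-entropy):
```python
from sentence_transformers import SentenceTransformer, losses, util
model = SentenceTransformer("sentence-transformers/all-distilroberta-v1")
loss = losses.MultipleNegativesRankingLoss(
model,
scale=20.0, # logit scale applied to the similarity matrix
similarity_fct=util.cos_sim, # cosine similarity between anchors and candidates
)
```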
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `num_train_epochs`: 2
- `multi_dataset_batch_sampler`: round_robin
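A minimal sketch of wiring these non-default hyperparameters into a training run; the two-column dataset here is a hypothetical stand-in for the (sentence_0, sentence_1) pairs described below, and the output directory name is an assumption:
```python
from datasets import Dataset
from sentence_transformers import (
SentenceTransformer,
SentenceTransformerTrainer,
SentenceTransformerTrainingArguments,
losses,
)
model = SentenceTransformer("sentence-transformers/all-distilroberta-v1")
# Hypothetical two-column pairs standing in for the 809 training samples
train_dataset = Dataset.from_dict({
"sentence_0": ["GCP Data Engineer, BigQuery, Airflow DAG, Hadoop ecosystem"],
"sentence_1": ["6-8 years of experience in data extraction and creating data pipeline workflows ..."],
})
args = SentenceTransformerTrainingArguments(
output_dir="distilroberta-ai-job-embeddings", # assumed name
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
num_train_epochs=2,
multi_dataset_batch_sampler="round_robin",
)
trainer = SentenceTransformerTrainer(
model=model,
args=args,
train_dataset=train_dataset,
loss=losses.MultipleNegativesRankingLoss(model),
)
trainer.train()
```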
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | ai-job-validation_cosine_accuracy | ai-job-test_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------------------------:|:---------------------------:|
| -1 | -1 | - | 0.8812 | - |
| 1.0 | 405 | - | 0.9901 | - |
| 1.2346 | 500 | 0.07 | - | - |
| 2.0 | 810 | - | 0.9901 | - |
| -1 | -1 | - | 0.9901 | 1.0 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
jiinking/5_layer_GQA4_llama3B_model
|
jiinking
| 2025-02-16T15:39:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-16T14:05:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF
|
mradermacher
| 2025-02-16T15:37:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"chat",
"conversational",
"en",
"ru",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"base_model:ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501",
"base_model:quantized:ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-02-16T14:41:29Z |
---
base_model: ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
language:
- en
- ru
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- mistral
- chat
- conversational
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
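For example, a single quant can be fetched and run locally; a minimal sketch, assuming a llama.cpp build that provides the `llama-cli` binary:
```bash
# Download the recommended Q4_K_M quant from this repo
huggingface-cli download mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF \
Zero-Mistral-Small-24B-Instruct-2501.Q4_K_M.gguf --local-dir .
# Run it with llama.cpp (generate up to 128 tokens)
llama-cli -m Zero-Mistral-Small-24B-Instruct-2501.Q4_K_M.gguf -p "Hello" -n 128
```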
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF/resolve/main/Zero-Mistral-Small-24B-Instruct-2501.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
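To fetch just one of the files listed above rather than cloning the whole repository, `huggingface_hub` can download it by name; the Q4_K_M pick below is only an example:

```python
# A minimal sketch: hf_hub_download caches the single requested quant
# and returns its local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Zero-Mistral-Small-24B-Instruct-2501-GGUF",
    filename="Zero-Mistral-Small-24B-Instruct-2501.Q4_K_M.gguf",
)
print(path)
```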
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
siya72005/hindi_finetuned_model
|
siya72005
| 2025-02-16T15:35:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-02-16T15:35:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
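Since the code stub above is still empty, here is a hedged minimal sketch, assuming the repository holds a working ALBERT fill-mask checkpoint for Hindi (as the repo tags suggest); the example sentence is illustrative only.

```python
# A minimal sketch: the fill-mask pipeline resolves the model's own
# mask token, so we don't hardcode [MASK].
from transformers import pipeline

unmasker = pipeline("fill-mask", model="siya72005/hindi_finetuned_model")
mask = unmasker.tokenizer.mask_token
for pred in unmasker(f"मुझे हिंदी {mask} बहुत पसंद है।", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```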
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thiagomg95/model
|
thiagomg95
| 2025-02-16T15:34:23Z | 1 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-14T21:36:46Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thiagomg95
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
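For illustration, a typical Unsloth fine-tuning setup along these lines looks like the sketch below; the sequence length and LoRA settings are assumptions, not values reported by this card:

```python
# A sketch of a common Unsloth recipe, not this model's actual training code.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-bnb-4bit",  # the stated base model
    max_seq_length=2048,   # assumed value
    load_in_4bit=True,
)
# Attach LoRA adapters for parameter-efficient fine-tuning (settings assumed).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```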
|
Justintvizle/kasimpasa-fenerbahce-maci-ne-zaman-saat-kacta-ve-hangi-kanalda-1739717728
|
Justintvizle
| 2025-02-16T15:32:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T15:29:05Z |
# Fenerbahçe-Kasımpaşa Match LIVE BROADCAST
# 👉[Watch the Trabzonspor-Beşiktaş Match Live](https://prostreamstv.com/turkish-super-lig/?twr)
<center>
<br>
<a href="https://prostreamstv.com/turkish-super-lig/?twr" title="Live Match Entry">
<img src="https://i.ibb.co/5K7Ks6w/zzzz3.gif" alt="Watch the Match Live" style="max-width:100%; border:2px solid #ddd; border-radius:10px;">
</a>
</center>
# 👉[Watch the Beşiktaş - Trabzonspor Match Live](https://prostreamstv.com/turkish-super-lig/?twr)
# Fenerbahçe-Kasımpaşa WATCH LIVE | When is the Fenerbahçe-Kasımpaşa match, what time does it start, and which channel is it on?
In week 24 of the Trendyol Süper Lig, Fenerbahçe face Kasımpaşa. Fresh from beating Anderlecht 3-0 in the UEFA Europa League play-off round in midweek, the Canaries want to keep their strong league form going. Preparing for the match under head coach Jose Mourinho, the yellow-and-navy side aim to take all three points in front of their own fans. Determined not to slip up before the derby against Galatasaray, Fenerbahçe want to arrive at RAMS Park having gone seven for seven. Kasımpaşa, seeking a revival under Burak Yılmaz, will be looking to take a point or all the points from their strong opponents. Football fans are also curious to see how Ali Şansalan performs, as he referees a Fenerbahçe match for the first time in roughly 3.5 years. The starting elevens have also been announced. So, when is the Fenerbahçe-Kasımpaşa match, what time does it kick off, and which channel is it on? Here are the details...
|
Justintvizle/fenerbahce-kasimpasa-maci-ne-zaman-saat-kacta-ve-hangi-kanalda-1739717728
|
Justintvizle
| 2025-02-16T15:32:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T15:29:07Z |
# Fenerbahçe-Kasımpaşa Match LIVE BROADCAST
# 👉[Watch the Trabzonspor-Beşiktaş Match Live](https://prostreamstv.com/turkish-super-lig/?twr)
<center>
<br>
<a href="https://prostreamstv.com/turkish-super-lig/?twr" title="Live Match Entry">
<img src="https://i.ibb.co/5K7Ks6w/zzzz3.gif" alt="Watch the Match Live" style="max-width:100%; border:2px solid #ddd; border-radius:10px;">
</a>
</center>
# 👉[Watch the Beşiktaş - Trabzonspor Match Live](https://prostreamstv.com/turkish-super-lig/?twr)
# Fenerbahçe-Kasımpaşa WATCH LIVE | When is the Fenerbahçe-Kasımpaşa match, what time does it start, and which channel is it on?
In week 24 of the Trendyol Süper Lig, Fenerbahçe face Kasımpaşa. Fresh from beating Anderlecht 3-0 in the UEFA Europa League play-off round in midweek, the Canaries want to keep their strong league form going. Preparing for the match under head coach Jose Mourinho, the yellow-and-navy side aim to take all three points in front of their own fans. Determined not to slip up before the derby against Galatasaray, Fenerbahçe want to arrive at RAMS Park having gone seven for seven. Kasımpaşa, seeking a revival under Burak Yılmaz, will be looking to take a point or all the points from their strong opponents. Football fans are also curious to see how Ali Şansalan performs, as he referees a Fenerbahçe match for the first time in roughly 3.5 years. The starting elevens have also been announced. So, when is the Fenerbahçe-Kasımpaşa match, what time does it kick off, and which channel is it on? Here are the details...
|
TheMelonGod/AceInstruct-7B-exl2
|
TheMelonGod
| 2025-02-16T15:32:21Z | 33 | 0 | null |
[
"quantized",
"safetensors",
"exllamav2",
"qwen2",
"nvidia",
"text-generation",
"en",
"base_model:nvidia/AceInstruct-7B",
"base_model:quantized:nvidia/AceInstruct-7B",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2025-02-13T12:09:43Z |
---
license: cc-by-nc-4.0
language:
- en
quantized_by: TheMelonGod
pipeline_tag: text-generation
tags:
- quantized
- safetensors
- exllamav2
- qwen2
- nvidia
base_model:
- nvidia/AceInstruct-7B
base_model_relation: quantized
---
**Original Model by:** [NVIDIA](https://huggingface.co/nvidia)
**Original Model:** [AceInstruct-7B](https://huggingface.co/nvidia/AceInstruct-7B)
For more information about the model, I highly recommend checking out the original model page, and the creator's profile while you're at it.
**ExLlamaV2 Quantizations:**
**8.0bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-8.0bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-8.0bpw)
**7.5bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-7.5bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-7.5bpw)
**7.0bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-7.0bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-7.0bpw)
**6.5bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-6.5bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-6.5bpw)
**6.0bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-6.0bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-6.0bpw)
**5.5bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-5.5bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-5.5bpw)
**5.0bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-5.0bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-5.0bpw)
**4.5bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-4.5bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-4.5bpw)
**4.25bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-4.25bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-4.25bpw)
**4.0bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-4.0bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-4.0bpw)
**3.75bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-3.75bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-3.75bpw)
**3.5bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-3.5bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-3.5bpw)
**3.0bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-3.0bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-3.0bpw)
**2.75bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-2.75bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-2.75bpw)
**2.5bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-2.5bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-2.5bpw)
**2.25bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-2.25bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-2.25bpw)
**2.0bpw**: [8hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/8hb-2.0bpw) | [6hb](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/tree/6hb-2.0bpw)
[Measurement File](https://huggingface.co/TheMelonGod/AceInstruct-7B-exl2/blob/main/AceInstruct-7B-measurement.json) _(Default/built-in calibration dataset was used)_
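Each quantization above lives on its own branch of the repo, so a specific variant can be pulled by passing the branch name as `revision`; the 8hb-4.0bpw choice below is illustrative:

```python
# A minimal sketch: snapshot_download fetches one branch of the repo
# and returns the local directory containing that quant.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheMelonGod/AceInstruct-7B-exl2",
    revision="8hb-4.0bpw",  # any branch from the table above works here
)
print(local_dir)
```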
If you need a specific model quantized or particular bits per weight, please let me know. I’m happy to help.
Your feedback and suggestions are always welcome! They help me improve and make quantizations better for everyone.
Special thanks to [turboderp](https://huggingface.co/turboderp) for developing the tools that made these quantizations possible. Your contributions are greatly appreciated!
|
Justintvizle/fenerbahce-kasimpasa-maci-canli-yayin-p13uu970
|
Justintvizle
| 2025-02-16T15:32:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T15:28:17Z |
# Fenerbahçe-Kasımpaşa Match LIVE BROADCAST
# 👉[Watch the Trabzonspor-Beşiktaş Match Live](https://prostreamstv.com/turkish-super-lig/?twr)
<center>
<br>
<a href="https://prostreamstv.com/turkish-super-lig/?twr" title="Live Match Entry">
<img src="https://i.ibb.co/5K7Ks6w/zzzz3.gif" alt="Watch the Match Live" style="max-width:100%; border:2px solid #ddd; border-radius:10px;">
</a>
</center>
# 👉[Watch the Beşiktaş - Trabzonspor Match Live](https://prostreamstv.com/turkish-super-lig/?twr)
# Live | Fenerbahçe-Kasımpaşa match live broadcast
The Fenerbahçe-Kasımpaşa match is live on Sözcü Spor... Fenerbahçe, who swept Anderlecht aside 3-0 in the UEFA Europa League, host Kasımpaşa. Having won their last 6 league matches, the yellow-and-navy side aim to stretch the run to 7 games.
|
Justintvizle/bahce-kasimpasa-maci-sifresiz-yayin-super-lig-maci-canli-yayin-2573178
|
Justintvizle
| 2025-02-16T15:32:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-16T15:27:40Z |
# Fenerbahçe-Kasımpaşa Match LIVE BROADCAST
# 👉[Watch the Trabzonspor-Beşiktaş Match Live](https://prostreamstv.com/turkish-super-lig/?twr)
<center>
<br>
<a href="https://prostreamstv.com/turkish-super-lig/?twr" title="Live Match Entry">
<img src="https://i.ibb.co/5K7Ks6w/zzzz3.gif" alt="Watch the Match Live" style="max-width:100%; border:2px solid #ddd; border-radius:10px;">
</a>
</center>
# 👉[Watch the Beşiktaş - Trabzonspor Match Live](https://prostreamstv.com/turkish-super-lig/?twr)
# Watch the Fenerbahçe-Kasımpaşa match live | Free-to-air F.Bahçe-Kasımpaşa broadcast (Süper Lig match live broadcast)
|
Haneela/my_awesome_model
|
Haneela
| 2025-02-16T15:31:16Z | 0 | 0 |
transformers
|
[
"transformers",
"tf",
"safetensors",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-15T10:49:52Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Haneela/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Haneela/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1386
- Validation Loss: 0.2052
- Train Accuracy: 0.9264
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} (see the reconstruction sketch below)
- training_precision: float32
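For readability, the serialized optimizer above corresponds to the following Keras objects (a reconstruction sketch built from the config values, not code taken from the training run):

```python
# Adam with a linear (power=1.0) PolynomialDecay schedule from 2e-5 to 0
# over 7810 steps, matching the serialized config above.
import tensorflow as tf

schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=7810,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-8
)
```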
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2584 | 0.1969 | 0.9245 | 0 |
| 0.1386 | 0.2052 | 0.9264 | 1 |
### Framework versions
- Transformers 4.47.0
- TensorFlow 2.17.1
- Datasets 3.2.0
- Tokenizers 0.21.0
|