modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
bistecglobal/resume-ranking-model | bistecglobal | 2024-10-18T09:39:54Z | 104 | 0 | transformers | ["transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2024-10-16T06:26:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
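Pending details from the authors, a generic 🤗 Transformers feature-extraction sketch may work, based only on the repository tags (`bert`, `feature-extraction`). The mean pooling and the example inputs below are illustrative assumptions, not documented behavior:

```python
import torch
from transformers import AutoTokenizer, AutoModel

repo_id = "bistecglobal/resume-ranking-model"  # repository id from this card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

# Illustrative inputs; the intended input format is not documented
texts = [
    "Senior data engineer with 5 years of Spark experience",
    "Job description: build and maintain ETL pipelines",
]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings (an assumption; the card does not state the pooling strategy)
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, hidden_size)
```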
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
skbose/llama3-8b-instruct-finetuned-5-epochs-4bit-nf4 | skbose | 2024-10-18T09:38:53Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-18T09:35:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
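No snippet is provided yet. Given the repository tags (`llama`, `text-generation`, `conversational`) and the 4-bit NF4 naming, one plausible starting point is the sketch below. It assumes the repository contains standard 🤗 Transformers weights and that the tokenizer config includes a chat template; the quantization and generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "skbose/llama3-8b-instruct-finetuned-5-epochs-4bit-nf4"

# NF4 4-bit loading is an assumption based on the repository name
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, quantization_config=bnb_config, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the benefits of 4-bit quantization."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```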
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alpcansoydas/product-model-18.10.24-ifhavemorethan100sampleperfamily-0.60acc | alpcansoydas | 2024-10-18T09:37:20Z | 11 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:24341", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-10-18T09:36:47Z |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:24341
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: SEPERASYON 3X2
sentences:
- Components for information technology or broadcasting or telecommunications
- Electronic hardware and component parts and accessories
- Data Voice or Multimedia Network Equipment or Platforms and Accessories
- source_sentence: Cisco 2PK ASR 9K Route Switch Processor Service Edge 880G/slot
sentences:
- Data Voice or Multimedia Network Equipment or Platforms and Accessories
- Data Voice or Multimedia Network Equipment or Platforms and Accessories
- Communications Devices and Accessories
- source_sentence: CORE NETWORK DONANIM HIZMET 2G3G_HUAWEI
sentences:
- Components for information technology or broadcasting or telecommunications
- Communications Devices and Accessories
- Components for information technology or broadcasting or telecommunications
- source_sentence: 2,4m GD Satcom Anten Feed Support Kit
sentences:
- Communications Devices and Accessories
- Components for information technology or broadcasting or telecommunications
- Components for information technology or broadcasting or telecommunications
- source_sentence: TARGUS 13.3 ATMOSPHERE LAPTOP CASE (TNT009EU)
sentences:
- Office machines and their supplies and accessories
- Components for information technology or broadcasting or telecommunications
- Electrical equipment and components and supplies
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: .nan
name: Pearson Cosine
- type: spearman_cosine
value: .nan
name: Spearman Cosine
- type: pearson_manhattan
value: .nan
name: Pearson Manhattan
- type: spearman_manhattan
value: .nan
name: Spearman Manhattan
- type: pearson_euclidean
value: .nan
name: Pearson Euclidean
- type: spearman_euclidean
value: .nan
name: Spearman Euclidean
- type: pearson_dot
value: .nan
name: Pearson Dot
- type: spearman_dot
value: .nan
name: Spearman Dot
- type: pearson_max
value: .nan
name: Pearson Max
- type: spearman_max
value: .nan
name: Spearman Max
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision ae06c001a2546bef168b9bf8f570ccb1a16aaa27 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("alpcansoydas/product-model-18.10.24-ifhavemorethan100sampleperfamily-0.60acc")
# Run inference
sentences = [
'TARGUS 13.3 ATMOSPHERE LAPTOP CASE (TNT009EU)',
'Office machines and their supplies and accessories',
'Electrical equipment and components and supplies',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:--------|
| pearson_cosine | nan |
| spearman_cosine | nan |
| pearson_manhattan | nan |
| spearman_manhattan | nan |
| pearson_euclidean | nan |
| spearman_euclidean | nan |
| pearson_dot | nan |
| spearman_dot | nan |
| pearson_max | nan |
| **spearman_max** | **nan** |
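For reference, this evaluator is typically constructed from paired sentences and gold similarity scores; below is a minimal sketch. The pairs and scores are placeholders taken from the widget examples, not the actual evaluation data:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer(
    "alpcansoydas/product-model-18.10.24-ifhavemorethan100sampleperfamily-0.60acc"
)

# Placeholder pairs with gold scores in [0, 1]. NaN correlations, as in the table above,
# typically arise when the gold scores are missing or constant, since Pearson/Spearman
# correlation is undefined for a constant input.
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["SEPERASYON 3X2", "TARGUS 13.3 ATMOSPHERE LAPTOP CASE (TNT009EU)"],
    sentences2=[
        "Components for information technology or broadcasting or telecommunications",
        "Office machines and their supplies and accessories",
    ],
    scores=[1.0, 1.0],
    name="example-sts",
)
print(evaluator(model))
```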
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 24,341 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 17.37 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 11.51 tokens</li><li>max: 16 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:-----------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|
| <code>CISCO.1000BASE-T SFP (NEBS 3 ESD)</code> | <code>Components for information technology or broadcasting or telecommunications</code> |
| <code>MINI-LINK 6365 15/A11H</code> | <code>Components for information technology or broadcasting or telecommunications</code> |
| <code>Aruba AP-367 (RW) 802.11n/ac Dual 2x2 2 Radio Integrated Directional Antenna Outdoor AP</code> | <code>Data Voice or Multimedia Network Equipment or Platforms and Accessories</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 3,043 evaluation samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 17.51 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 11.69 tokens</li><li>max: 16 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:-------------------------------------|:-----------------------------------------------------------------------------------------|
| <code>Multicast Analyzer Card</code> | <code>Components for information technology or broadcasting or telecommunications</code> |
| <code>12m 130x5 MONOPOL Kule</code> | <code>Structural components and basic shapes</code> |
| <code>ANT3 A 0.6 23 HPX</code> | <code>Components for information technology or broadcasting or telecommunications</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `warmup_ratio`: 0.1
- `fp16`: True
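As a rough reconstruction, the non-default values above map onto the Sentence Transformers v3 training API roughly as follows. This is a sketch under the assumption that the standard `SentenceTransformerTrainer` was used; dataset loading is omitted and the output path is illustrative:

```python
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainingArguments
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Loss configuration from the card: scale=20.0 with cosine similarity (the MNRL defaults)
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="product-model",  # illustrative path
    num_train_epochs=3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
)

# A SentenceTransformerTrainer would then be built with these args, the loss,
# and the (unnamed) train/eval datasets, followed by trainer.train().
```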
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | spearman_max |
|:------:|:----:|:-------------:|:---------------:|:------------:|
| 0.1314 | 100 | 3.202 | 2.6937 | nan |
| 0.2628 | 200 | 2.675 | 2.5088 | nan |
| 0.3942 | 300 | 2.5367 | 2.4273 | nan |
| 0.5256 | 400 | 2.4877 | 2.3843 | nan |
| 0.6570 | 500 | 2.4297 | 2.3481 | nan |
| 0.7884 | 600 | 2.3945 | 2.3065 | nan |
| 0.9198 | 700 | 2.343 | 2.2810 | nan |
| 1.0512 | 800 | 2.2264 | 2.2955 | nan |
| 1.1827 | 900 | 2.2133 | 2.2620 | nan |
| 1.3141 | 1000 | 2.2009 | 2.2376 | nan |
| 1.4455 | 1100 | 2.2104 | 2.2506 | nan |
| 1.5769 | 1200 | 2.1665 | 2.2462 | nan |
| 1.7083 | 1300 | 2.1891 | 2.2210 | nan |
| 1.8397 | 1400 | 2.1694 | 2.2007 | nan |
| 1.9711 | 1500 | 2.15 | 2.2014 | nan |
| 2.1025 | 1600 | 2.0314 | 2.2281 | nan |
| 2.2339 | 1700 | 2.0491 | 2.2212 | nan |
| 2.3653 | 1800 | 2.015 | 2.2237 | nan |
| 2.4967 | 1900 | 2.0278 | 2.2185 | nan |
| 2.6281 | 2000 | 2.0163 | 2.2122 | nan |
| 2.7595 | 2100 | 1.9732 | 2.2137 | nan |
| 2.8909 | 2200 | 2.0244 | 2.2096 | nan |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
AZIIIIIIIIZ/distilbert-base-uncased_finetuned_code_text_classifier | AZIIIIIIIIZ | 2024-10-18T09:33:31Z | 121 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-10-18T09:04:25Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased_finetuned_code_text_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_finetuned_code_text_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1568
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
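Pending further documentation from the authors, the classifier can presumably be loaded with the standard Transformers pipeline. A minimal sketch follows; the label names depend on the repository's `config.json`, and the inputs are illustrative guesses based on the model name:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AZIIIIIIIIZ/distilbert-base-uncased_finetuned_code_text_classifier",
)

# Illustrative inputs: the model name suggests it distinguishes code from natural-language text
print(classifier([
    "def add(a, b): return a + b",
    "The meeting is scheduled for Monday morning.",
]))
```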
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
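Expressed as 🤗 Transformers `TrainingArguments`, these settings correspond approximately to the sketch below; values not listed above (including the Adam betas and epsilon, which match the defaults) are left unset:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased_finetuned_code_text_classifier",  # illustrative path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```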
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.5316 | 1.0 | 13 | 0.2807 | 1.0 | 1.0 |
| 0.2471 | 2.0 | 26 | 0.1568 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
debapriyo/llama3.18B-Fine-tunedByDebapriyo | debapriyo | 2024-10-18T09:28:38Z | 77 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2024-10-18T06:40:45Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arcee-ai/Meraj-Mini | arcee-ai | 2024-10-18T09:27:17Z | 48 | 13 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "qwen", "text-generation-inference", "text2text-generation", "ar", "en", "arxiv:2305.18290", "arxiv:2403.13257", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-10-06T09:27:48Z |
---
license: apache-2.0
language:
- ar
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text2text-generation
library_name: transformers
tags:
- qwen
- text-generation-inference
---
<div align="center">
<img src="https://i.ibb.co/CmPSSpq/Screenshot-2024-10-06-at-9-45-06-PM.png" alt="Arcee Meraj Mini" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
Following the release of [Arcee Meraj](https://meraj.arcee.ai/), our enterprise's globally top-performing Arabic LLM, we are thrilled to unveil Arcee Meraj Mini. This open-source model, meticulously fine-tuned from [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), is expertly designed for both Arabic and English. This model has undergone rigorous evaluation across multiple benchmarks in both languages, demonstrating top-tier performance in Arabic and competitive results in English. Arcee Meraj Mini’s primary objective is to enhance Arabic capabilities while maintaining robust English language proficiency. Benchmark results confirm that Arcee Meraj Mini excels in Arabic, with English performance comparable to leading models — perfectly aligning with our vision for balanced bilingual strength.
## Technical Details
Below is an overview of the key stages in Meraj Mini’s development:
1. **Data Preparation:** We filter candidate samples from diverse English and Arabic sources to ensure high-quality data. Some of the selected English datasets are translated into Arabic to increase the quantity of Arabic samples and improve the model’s quality in bilingual performance. Then, new [Direct Preference Optimization (DPO)](https://arxiv.org/pdf/2305.18290) datasets are continuously prepared, filtered, and translated to maintain a fresh and diverse dataset that supports better generalization across domains.
2. **Initial Training:** We train the Qwen2.5 model with 7 billion parameters using these high-quality datasets in both languages. This allows the model to handle diverse linguistic patterns from over 500 million tokens, ensuring strong performance in Arabic and English tasks.
3. **Iterative Training and Post-Training:** Iterative training and post-training iterations refine the model, enhancing its accuracy and adaptability to ensure it can perform well across varied tasks and language contexts.
4. **Evaluation:** We train and evaluate 15 different variants to explore optimal configurations, assessing each on both Arabic and English benchmarks and leaderboards. This step ensures the model is robust in handling both general and domain-specific tasks.
5. **Final Model Creation:** We select the best-performing variant and use the [MergeKit](https://arxiv.org/pdf/2403.13257) library to merge the configurations, resulting in the final Arcee Meraj Mini model. This model is not only optimized for language understanding but also serves as a starting point for domain adaptation in different areas.
With this process, Arcee Meraj Mini is crafted to be more than just a general-purpose language model—it’s an adaptable tool, ready to be fine-tuned for specific industries and applications, empowering users to extend its capabilities for domain-specific tasks.
## Capabilities and Use Cases
Arcee Meraj Mini can solve a wide range of language tasks, including the following:
1. **Arabic Language Understanding**: Arcee Meraj Mini excels in general language comprehension, reading comprehension, and common-sense reasoning, all tailored to the Arabic language, providing strong performance in a variety of linguistic tasks.
2. **Cultural Adaptation**: The model ensures content creation that goes beyond linguistic accuracy, incorporating cultural nuances to align with Arabic norms and values, making it suitable for culturally relevant applications.
3. **Education**: It enables personalized, adaptive learning experiences for Arabic speakers by generating high-quality educational content across diverse subjects, enhancing the overall learning journey.
4. **Mathematics and Coding**: With robust support for mathematical reasoning and problem-solving, as well as code generation in Arabic, Arcee Meraj Mini serves as a valuable tool for developers and professionals in technical fields.
5. **Customer Service**: The model facilitates the development of advanced Arabic-speaking chatbots and virtual assistants, capable of managing customer queries with a high degree of natural language understanding and precision.
6. **Content Creation**: Arcee Meraj Mini generates high-quality Arabic content for various needs, from marketing materials and technical documentation to creative writing, ensuring impactful communication and engagement in the Arabic-speaking world.
## Quantized GGUF
Quantized GGUF versions of the model are available here:
- [Meraj-Mini-GGUF](https://huggingface.co/MaziyarPanahi/Meraj-Mini-GGUF)
## How to Use
This model uses the ChatML prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
messages = [
{"role": "user", "content": "مرحبا، كيف حالك؟"},
]
pipe = pipeline("text-generation", model="arcee-ai/Meraj-Mini")
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Meraj-Mini")
model = AutoModelForCausalLM.from_pretrained("arcee-ai/Meraj-Mini")
```
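Assuming the tokenizer config ships with the ChatML template shown above, generation can also be driven through `apply_chat_template`; a minimal sketch (the system prompt and sampling settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Meraj-Mini")
model = AutoModelForCausalLM.from_pretrained("arcee-ai/Meraj-Mini", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful bilingual assistant."},
    {"role": "user", "content": "مرحبا، كيف حالك؟"},
]

# apply_chat_template renders the ChatML prompt shown above and appends the assistant header
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```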
## Evaluations
#### Open Arabic LLM Leaderboard (OALL) Benchmarks
The Arcee Meraj Mini model consistently outperforms state-of-the-art models on most of the Open Arabic LLM Leaderboard (OALL) benchmarks, highlighting its effectiveness on Arabic-language content and securing the top average score among the compared models.
<div align="center">
<img src="https://i.ibb.co/LQ0z7fH/Screenshot-2024-10-15-at-2-53-45-PM.png" alt="Arcee Meraj Mini Open Arabic LLM Leaderboard (OALL) - table 1" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/fM6VQR7/Screenshot-2024-10-15-at-2-53-55-PM.png" alt="Arcee Meraj Mini Open Arabic LLM Leaderboard (OALL) - table 2" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
#### Translated MMLU
We focused on the multilingual MMLU dataset, as distributed through the LM Evaluation Harness repository, to compare the multilingual strength of different models on this benchmark. Arcee Meraj Mini outperforms the other models, showcasing its superior performance compared to other state-of-the-art models.
<div align="center">
<img src="https://i.ibb.co/dfwW1W5/W-B-Chart-10-15-2024-2-07-12-PM.png" alt="Arcee Meraj Mini Trnalsated MMLU" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
#### English Benchmarks
Arcee Meraj Mini performs comparably to state-of-the-art models, demonstrating how the model retains its English language knowledge and capabilities while learning Arabic.
<div align="center">
<img src="https://i.ibb.co/mTcLFzt/W-B-Chart-10-15-2024-2-15-57-PM.png" alt="Arcee Meraj Mini Winogrande" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/GRBjjGN/W-B-Chart-10-15-2024-2-17-34-PM.png" alt="Arcee Meraj Mini Arc Challenge" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/98s0qTf/W-B-Chart-10-15-2024-2-17-46-PM.png" alt="Arcee Meraj Mini TruthfulQA" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/yqvRK3L/W-B-Chart-10-15-2024-2-17-57-PM.png" alt="Arcee Meraj Mini GSM8K" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
## Model Usage
For a detailed explanation of the model's capabilities, architecture, and applications, please refer to our blog post: https://blog.arcee.ai/arcee-meraj-mini-2/
To test the model directly, you can try it out using this Google Colab notebook: https://colab.research.google.com/drive/1hXXyNM-X0eKwlZ5OwqhZfO0U8CBq8pFO?usp=sharing
## Acknowledgements
We are grateful to the open-source AI community for their continuous contributions and to the Qwen team for their foundational efforts on the Qwen2.5 model series.
## Future Directions
As we release the Arcee Meraj Mini to the public, we invite researchers, developers, and businesses to engage with the Arcee Meraj Mini model, particularly in enhancing support for the Arabic language and fostering domain adaptation. We are committed to advancing open-source AI technology and invite the community to explore, contribute, and build upon Arcee Meraj Mini.
|
KONIexp/20_80_inst_exp_Llama-3_1-8B-Instruct_20241018 | KONIexp | 2024-10-18T09:24:13Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-18T09:20:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
youssefkhalil320/all-MiniLM-L6-v2-triplet-loss | youssefkhalil320 | 2024-10-18T09:23:59Z | 9 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1087179", "loss:TripletLoss", "en", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-10-18T09:20:06Z |
---
base_model: sentence-transformers/all-MiniLM-L6-v2
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1087179
- loss:TripletLoss
widget:
- source_sentence: hyperactive impulsive adhd
sentences:
- Claw Clip
- egyptian postage
- mug
- source_sentence: Work of Madness Hoodie
sentences:
- t-shirt
- towel
- men hoodie
- source_sentence: E7Lam Hoodie
sentences:
- Al Mady Hoodie
- waterfall cup
- hoodie
- source_sentence: Tote bag
sentences:
- Waterfall Mug
- hoodie
- linen tote bag
- source_sentence: Kimono
sentences:
- mug
- fringe kaftan
- shoes
model-index:
- name: all-MiniLM-L6-v2-triplet-loss
results:
- task:
type: triplet
name: Triplet
dataset:
name: all nli dev
type: all-nli-dev
metrics:
- type: cosine_accuracy
value: 0.9168454165823506
name: Cosine Accuracy
- type: dot_accuracy
value: 0.08315458341764934
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.9135451351202193
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.9168454165823506
name: Euclidean Accuracy
- type: max_accuracy
value: 0.9168454165823506
name: Max Accuracy
---
# all-MiniLM-L6-v2-triplet-loss
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision ea78891063587eb050ed4166b20062eaf978037c -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Kimono',
'fringe kaftan',
'mug',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `all-nli-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy | 0.9168 |
| dot_accuracy | 0.0832 |
| manhattan_accuracy | 0.9135 |
| euclidean_accuracy | 0.9168 |
| **max_accuracy** | **0.9168** |
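The evaluator takes parallel lists of anchors, positives, and negatives; below is a minimal sketch of reproducing such an evaluation. The triples are placeholders drawn from the widget examples, not the actual `all-nli-dev` split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("youssefkhalil320/all-MiniLM-L6-v2-triplet-loss")

# Placeholder triples: (anchor, positive, negative)
evaluator = TripletEvaluator(
    anchors=["Work of Madness Hoodie", "Tote bag"],
    positives=["men hoodie", "linen tote bag"],
    negatives=["towel", "mug"],
    name="all-nli-dev",
)
print(evaluator(model))  # reports cosine/dot/Manhattan/Euclidean accuracy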
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
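The `loss:TripletLoss` tag suggests the loss was constructed roughly as below (a sketch; the distance metric and margin shown are the library defaults, not values confirmed by the card). The hyperparameters above plug into `SentenceTransformerTrainingArguments` in the usual Sentence Transformers v3 fashion:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Library-default TripletLoss settings (Euclidean distance, margin 5); the card states no overrides
loss = TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=5)
```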
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | all-nli-dev_max_accuracy |
|:------:|:------:|:-------------:|:------:|:------------------------:|
| 0 | 0 | - | - | 0.9168 |
| 0.0029 | 100 | 4.7115 | - | - |
| 0.0059 | 200 | 4.6948 | - | - |
| 0.0088 | 300 | 4.6548 | - | - |
| 0.0118 | 400 | 4.6055 | - | - |
| 0.0147 | 500 | 4.5234 | 4.3878 | - |
| 0.0177 | 600 | 4.4338 | - | - |
| 0.0206 | 700 | 4.2938 | - | - |
| 0.0235 | 800 | 4.1176 | - | - |
| 0.0265 | 900 | 3.9373 | - | - |
| 0.0294 | 1000 | 3.7241 | 3.4721 | - |
| 0.0324 | 1100 | 3.5965 | - | - |
| 0.0353 | 1200 | 3.4949 | - | - |
| 0.0383 | 1300 | 3.4542 | - | - |
| 0.0412 | 1400 | 3.4345 | - | - |
| 0.0442 | 1500 | 3.3955 | 3.2453 | - |
| 0.0471 | 1600 | 3.3818 | - | - |
| 0.0500 | 1700 | 3.3608 | - | - |
| 0.0530 | 1800 | 3.3377 | - | - |
| 0.0559 | 1900 | 3.326 | - | - |
| 0.0589 | 2000 | 3.3061 | 3.1692 | - |
| 0.0618 | 2100 | 3.308 | - | - |
| 0.0648 | 2200 | 3.2887 | - | - |
| 0.0677 | 2300 | 3.2963 | - | - |
| 0.0706 | 2400 | 3.2744 | - | - |
| 0.0736 | 2500 | 3.2601 | 3.1416 | - |
| 0.0765 | 2600 | 3.271 | - | - |
| 0.0795 | 2700 | 3.2501 | - | - |
| 0.0824 | 2800 | 3.2536 | - | - |
| 0.0854 | 2900 | 3.2689 | - | - |
| 0.0883 | 3000 | 3.2362 | 3.1196 | - |
| 0.0912 | 3100 | 3.2281 | - | - |
| 0.0942 | 3200 | 3.2351 | - | - |
| 0.0971 | 3300 | 3.2173 | - | - |
| 0.1001 | 3400 | 3.2055 | - | - |
| 0.1030 | 3500 | 3.2198 | 3.1081 | - |
| 0.1060 | 3600 | 3.2116 | - | - |
| 0.1089 | 3700 | 3.2088 | - | - |
| 0.1118 | 3800 | 3.2043 | - | - |
| 0.1148 | 3900 | 3.1943 | - | - |
| 0.1177 | 4000 | 3.1897 | 3.1027 | - |
| 0.1207 | 4100 | 3.2131 | - | - |
| 0.1236 | 4200 | 3.198 | - | - |
| 0.1266 | 4300 | 3.1892 | - | - |
| 0.1295 | 4400 | 3.1753 | - | - |
| 0.1325 | 4500 | 3.1722 | 3.0840 | - |
| 0.1354 | 4600 | 3.1599 | - | - |
| 0.1383 | 4700 | 3.166 | - | - |
| 0.1413 | 4800 | 3.1585 | - | - |
| 0.1442 | 4900 | 3.1698 | - | - |
| 0.1472 | 5000 | 3.1766 | 3.0782 | - |
| 0.1501 | 5100 | 3.1515 | - | - |
| 0.1531 | 5200 | 3.1487 | - | - |
| 0.1560 | 5300 | 3.1579 | - | - |
| 0.1589 | 5400 | 3.1533 | - | - |
| 0.1619 | 5500 | 3.1433 | 3.0735 | - |
| 0.1648 | 5600 | 3.1454 | - | - |
| 0.1678 | 5700 | 3.1397 | - | - |
| 0.1707 | 5800 | 3.1422 | - | - |
| 0.1737 | 5900 | 3.1372 | - | - |
| 0.1766 | 6000 | 3.137 | 3.0710 | - |
| 0.1795 | 6100 | 3.1297 | - | - |
| 0.1825 | 6200 | 3.1202 | - | - |
| 0.1854 | 6300 | 3.1256 | - | - |
| 0.1884 | 6400 | 3.1185 | - | - |
| 0.1913 | 6500 | 3.1266 | 3.0667 | - |
| 0.1943 | 6600 | 3.1197 | - | - |
| 0.1972 | 6700 | 3.1286 | - | - |
| 0.2001 | 6800 | 3.1239 | - | - |
| 0.2031 | 6900 | 3.1166 | - | - |
| 0.2060 | 7000 | 3.1054 | 3.0664 | - |
| 0.2090 | 7100 | 3.1103 | - | - |
| 0.2119 | 7200 | 3.0929 | - | - |
| 0.2149 | 7300 | 3.1051 | - | - |
| 0.2178 | 7400 | 3.1023 | - | - |
| 0.2208 | 7500 | 3.0946 | 3.0636 | - |
| 0.2237 | 7600 | 3.0958 | - | - |
| 0.2266 | 7700 | 3.0907 | - | - |
| 0.2296 | 7800 | 3.1051 | - | - |
| 0.2325 | 7900 | 3.0965 | - | - |
| 0.2355 | 8000 | 3.0954 | 3.0617 | - |
| 0.2384 | 8100 | 3.0693 | - | - |
| 0.2414 | 8200 | 3.0906 | - | - |
| 0.2443 | 8300 | 3.0881 | - | - |
| 0.2472 | 8400 | 3.0867 | - | - |
| 0.2502 | 8500 | 3.0867 | 3.0610 | - |
| 0.2531 | 8600 | 3.0909 | - | - |
| 0.2561 | 8700 | 3.0877 | - | - |
| 0.2590 | 8800 | 3.0837 | - | - |
| 0.2620 | 8900 | 3.0865 | - | - |
| 0.2649 | 9000 | 3.0846 | 3.0607 | - |
| 0.2678 | 9100 | 3.0798 | - | - |
| 0.2708 | 9200 | 3.0928 | - | - |
| 0.2737 | 9300 | 3.0794 | - | - |
| 0.2767 | 9400 | 3.0797 | - | - |
| 0.2796 | 9500 | 3.0685 | 3.0623 | - |
| 0.2826 | 9600 | 3.0768 | - | - |
| 0.2855 | 9700 | 3.0657 | - | - |
| 0.2884 | 9800 | 3.0838 | - | - |
| 0.2914 | 9900 | 3.0775 | - | - |
| 0.2943 | 10000 | 3.0667 | 3.0587 | - |
| 0.2973 | 10100 | 3.088 | - | - |
| 0.3002 | 10200 | 3.0824 | - | - |
| 0.3032 | 10300 | 3.0754 | - | - |
| 0.3061 | 10400 | 3.064 | - | - |
| 0.3091 | 10500 | 3.0637 | 3.0578 | - |
| 0.3120 | 10600 | 3.0754 | - | - |
| 0.3149 | 10700 | 3.0703 | - | - |
| 0.3179 | 10800 | 3.0697 | - | - |
| 0.3208 | 10900 | 3.0635 | - | - |
| 0.3238 | 11000 | 3.0872 | 3.0573 | - |
| 0.3267 | 11100 | 3.0722 | - | - |
| 0.3297 | 11200 | 3.0633 | - | - |
| 0.3326 | 11300 | 3.058 | - | - |
| 0.3355 | 11400 | 3.0601 | - | - |
| 0.3385 | 11500 | 3.0732 | 3.0583 | - |
| 0.3414 | 11600 | 3.0565 | - | - |
| 0.3444 | 11700 | 3.0735 | - | - |
| 0.3473 | 11800 | 3.0656 | - | - |
| 0.3503 | 11900 | 3.0583 | - | - |
| 0.3532 | 12000 | 3.0714 | 3.0574 | - |
| 0.3561 | 12100 | 3.0647 | - | - |
| 0.3591 | 12200 | 3.0522 | - | - |
| 0.3620 | 12300 | 3.0668 | - | - |
| 0.3650 | 12400 | 3.071 | - | - |
| 0.3679 | 12500 | 3.0667 | 3.0556 | - |
| 0.3709 | 12600 | 3.0568 | - | - |
| 0.3738 | 12700 | 3.0642 | - | - |
| 0.3767 | 12800 | 3.0607 | - | - |
| 0.3797 | 12900 | 3.0679 | - | - |
| 0.3826 | 13000 | 3.0547 | 3.0547 | - |
| 0.3856 | 13100 | 3.0714 | - | - |
| 0.3885 | 13200 | 3.0692 | - | - |
| 0.3915 | 13300 | 3.0597 | - | - |
| 0.3944 | 13400 | 3.067 | - | - |
| 0.3974 | 13500 | 3.0626 | 3.0551 | - |
| 0.4003 | 13600 | 3.0708 | - | - |
| 0.4032 | 13700 | 3.065 | - | - |
| 0.4062 | 13800 | 3.0619 | - | - |
| 0.4091 | 13900 | 3.0556 | - | - |
| 0.4121 | 14000 | 3.0708 | 3.0524 | - |
| 0.4150 | 14100 | 3.0634 | - | - |
| 0.4180 | 14200 | 3.0605 | - | - |
| 0.4209 | 14300 | 3.0555 | - | - |
| 0.4238 | 14400 | 3.0624 | - | - |
| 0.4268 | 14500 | 3.0468 | 3.0510 | - |
| 0.4297 | 14600 | 3.0534 | - | - |
| 0.4327 | 14700 | 3.0671 | - | - |
| 0.4356 | 14800 | 3.0714 | - | - |
| 0.4386 | 14900 | 3.0493 | - | - |
| 0.4415 | 15000 | 3.0457 | 3.0467 | - |
| 0.4444 | 15100 | 3.0599 | - | - |
| 0.4474 | 15200 | 3.0554 | - | - |
| 0.4503 | 15300 | 3.0466 | - | - |
| 0.4533 | 15400 | 3.0471 | - | - |
| 0.4562 | 15500 | 3.0465 | 3.0500 | - |
| 0.4592 | 15600 | 3.0556 | - | - |
| 0.4621 | 15700 | 3.0444 | - | - |
| 0.4650 | 15800 | 3.0468 | - | - |
| 0.4680 | 15900 | 3.0554 | - | - |
| 0.4709 | 16000 | 3.0573 | 3.0469 | - |
| 0.4739 | 16100 | 3.049 | - | - |
| 0.4768 | 16200 | 3.0539 | - | - |
| 0.4798 | 16300 | 3.052 | - | - |
| 0.4827 | 16400 | 3.0538 | - | - |
| 0.4857 | 16500 | 3.045 | 3.0444 | - |
| 0.4886 | 16600 | 3.0381 | - | - |
| 0.4915 | 16700 | 3.0517 | - | - |
| 0.4945 | 16800 | 3.0598 | - | - |
| 0.4974 | 16900 | 3.046 | - | - |
| 0.5004 | 17000 | 3.0478 | 3.0447 | - |
| 0.5033 | 17100 | 3.054 | - | - |
| 0.5063 | 17200 | 3.0471 | - | - |
| 0.5092 | 17300 | 3.0383 | - | - |
| 0.5121 | 17400 | 3.0539 | - | - |
| 0.5151 | 17500 | 3.0457 | 3.0432 | - |
| 0.5180 | 17600 | 3.05 | - | - |
| 0.5210 | 17700 | 3.05 | - | - |
| 0.5239 | 17800 | 3.0512 | - | - |
| 0.5269 | 17900 | 3.0399 | - | - |
| 0.5298 | 18000 | 3.048 | 3.0431 | - |
| 0.5327 | 18100 | 3.0367 | - | - |
| 0.5357 | 18200 | 3.0442 | - | - |
| 0.5386 | 18300 | 3.0472 | - | - |
| 0.5416 | 18400 | 3.0335 | - | - |
| 0.5445 | 18500 | 3.0465 | 3.0459 | - |
| 0.5475 | 18600 | 3.054 | - | - |
| 0.5504 | 18700 | 3.0489 | - | - |
| 0.5533 | 18800 | 3.037 | - | - |
| 0.5563 | 18900 | 3.0432 | - | - |
| 0.5592 | 19000 | 3.0401 | 3.0426 | - |
| 0.5622 | 19100 | 3.0369 | - | - |
| 0.5651 | 19200 | 3.0561 | - | - |
| 0.5681 | 19300 | 3.0469 | - | - |
| 0.5710 | 19400 | 3.0468 | - | - |
| 0.5740 | 19500 | 3.0455 | 3.0433 | - |
| 0.5769 | 19600 | 3.0512 | - | - |
| 0.5798 | 19700 | 3.0474 | - | - |
| 0.5828 | 19800 | 3.043 | - | - |
| 0.5857 | 19900 | 3.0473 | - | - |
| 0.5887 | 20000 | 3.0448 | 3.0415 | - |
| 0.5916 | 20100 | 3.0441 | - | - |
| 0.5946 | 20200 | 3.0403 | - | - |
| 0.5975 | 20300 | 3.0516 | - | - |
| 0.6004 | 20400 | 3.0459 | - | - |
| 0.6034 | 20500 | 3.0415 | 3.0415 | - |
| 0.6063 | 20600 | 3.034 | - | - |
| 0.6093 | 20700 | 3.0483 | - | - |
| 0.6122 | 20800 | 3.0538 | - | - |
| 0.6152 | 20900 | 3.0458 | - | - |
| 0.6181 | 21000 | 3.0445 | 3.0372 | - |
| 0.6210 | 21100 | 3.0414 | - | - |
| 0.6240 | 21200 | 3.0476 | - | - |
| 0.6269 | 21300 | 3.0638 | - | - |
| 0.6299 | 21400 | 3.0375 | - | - |
| 0.6328 | 21500 | 3.0425 | 3.0397 | - |
| 0.6358 | 21600 | 3.0394 | - | - |
| 0.6387 | 21700 | 3.0443 | - | - |
| 0.6416 | 21800 | 3.0381 | - | - |
| 0.6446 | 21900 | 3.0387 | - | - |
| 0.6475 | 22000 | 3.0255 | 3.0381 | - |
| 0.6505 | 22100 | 3.0355 | - | - |
| 0.6534 | 22200 | 3.0411 | - | - |
| 0.6564 | 22300 | 3.0436 | - | - |
| 0.6593 | 22400 | 3.038 | - | - |
| 0.6623 | 22500 | 3.0336 | 3.0325 | - |
| 0.6652 | 22600 | 3.0404 | - | - |
| 0.6681 | 22700 | 3.0374 | - | - |
| 0.6711 | 22800 | 3.0342 | - | - |
| 0.6740 | 22900 | 3.0385 | - | - |
| 0.6770 | 23000 | 3.0329 | 3.0342 | - |
| 0.6799 | 23100 | 3.0391 | - | - |
| 0.6829 | 23200 | 3.0366 | - | - |
| 0.6858 | 23300 | 3.0284 | - | - |
| 0.6887 | 23400 | 3.0328 | - | - |
| 0.6917 | 23500 | 3.0322 | 3.0333 | - |
| 0.6946 | 23600 | 3.0353 | - | - |
| 0.6976 | 23700 | 3.0371 | - | - |
| 0.7005 | 23800 | 3.0321 | - | - |
| 0.7035 | 23900 | 3.0365 | - | - |
| 0.7064 | 24000 | 3.0302 | 3.0342 | - |
| 0.7093 | 24100 | 3.0352 | - | - |
| 0.7123 | 24200 | 3.0277 | - | - |
| 0.7152 | 24300 | 3.0402 | - | - |
| 0.7182 | 24400 | 3.0364 | - | - |
| 0.7211 | 24500 | 3.0439 | 3.0336 | - |
| 0.7241 | 24600 | 3.0396 | - | - |
| 0.7270 | 24700 | 3.0475 | - | - |
| 0.7299 | 24800 | 3.0258 | - | - |
| 0.7329 | 24900 | 3.0345 | - | - |
| 0.7358 | 25000 | 3.0326 | 3.0350 | - |
| 0.7388 | 25100 | 3.0357 | - | - |
| 0.7417 | 25200 | 3.0413 | - | - |
| 0.7447 | 25300 | 3.0326 | - | - |
| 0.7476 | 25400 | 3.0401 | - | - |
| 0.7506 | 25500 | 3.0313 | 3.0365 | - |
| 0.7535 | 25600 | 3.04 | - | - |
| 0.7564 | 25700 | 3.0382 | - | - |
| 0.7594 | 25800 | 3.0344 | - | - |
| 0.7623 | 25900 | 3.0325 | - | - |
| 0.7653 | 26000 | 3.0475 | 3.0340 | - |
| 0.7682 | 26100 | 3.0256 | - | - |
| 0.7712 | 26200 | 3.0331 | - | - |
| 0.7741 | 26300 | 3.0325 | - | - |
| 0.7770 | 26400 | 3.0431 | - | - |
| 0.7800 | 26500 | 3.04 | 3.0372 | - |
| 0.7829 | 26600 | 3.0393 | - | - |
| 0.7859 | 26700 | 3.0374 | - | - |
| 0.7888 | 26800 | 3.0406 | - | - |
| 0.7918 | 26900 | 3.0343 | - | - |
| 0.7947 | 27000 | 3.0374 | 3.0325 | - |
| 0.7976 | 27100 | 3.0262 | - | - |
| 0.8006 | 27200 | 3.0393 | - | - |
| 0.8035 | 27300 | 3.0255 | - | - |
| 0.8065 | 27400 | 3.0305 | - | - |
| 0.8094 | 27500 | 3.0324 | 3.0323 | - |
| 0.8124 | 27600 | 3.0317 | - | - |
| 0.8153 | 27700 | 3.0267 | - | - |
| 0.8182 | 27800 | 3.0299 | - | - |
| 0.8212 | 27900 | 3.0305 | - | - |
| 0.8241 | 28000 | 3.0336 | 3.0319 | - |
| 0.8271 | 28100 | 3.0373 | - | - |
| 0.8300 | 28200 | 3.0342 | - | - |
| 0.8330 | 28300 | 3.0436 | - | - |
| 0.8359 | 28400 | 3.0354 | - | - |
| 0.8389 | 28500 | 3.0373 | 3.0291 | - |
| 0.8418 | 28600 | 3.0292 | - | - |
| 0.8447 | 28700 | 3.0229 | - | - |
| 0.8477 | 28800 | 3.0348 | - | - |
| 0.8506 | 28900 | 3.041 | - | - |
| 0.8536 | 29000 | 3.031 | 3.0324 | - |
| 0.8565 | 29100 | 3.0354 | - | - |
| 0.8595 | 29200 | 3.0242 | - | - |
| 0.8624 | 29300 | 3.026 | - | - |
| 0.8653 | 29400 | 3.0373 | - | - |
| 0.8683 | 29500 | 3.0298 | 3.0276 | - |
| 0.8712 | 29600 | 3.0341 | - | - |
| 0.8742 | 29700 | 3.0304 | - | - |
| 0.8771 | 29800 | 3.0241 | - | - |
| 0.8801 | 29900 | 3.0304 | - | - |
| 0.8830 | 30000 | 3.0279 | 3.0278 | - |
| 0.8859 | 30100 | 3.026 | - | - |
| 0.8889 | 30200 | 3.0272 | - | - |
| 0.8918 | 30300 | 3.0372 | - | - |
| 0.8948 | 30400 | 3.0241 | - | - |
| 0.8977 | 30500 | 3.0347 | 3.0276 | - |
| 0.9007 | 30600 | 3.0335 | - | - |
| 0.9036 | 30700 | 3.0316 | - | - |
| 0.9065 | 30800 | 3.0372 | - | - |
| 0.9095 | 30900 | 3.0234 | - | - |
| 0.9124 | 31000 | 3.0303 | 3.0278 | - |
| 0.9154 | 31100 | 3.0466 | - | - |
| 0.9183 | 31200 | 3.0391 | - | - |
| 0.9213 | 31300 | 3.0334 | - | - |
| 0.9242 | 31400 | 3.029 | - | - |
| 0.9272 | 31500 | 3.0322 | 3.0280 | - |
| 0.9301 | 31600 | 3.0272 | - | - |
| 0.9330 | 31700 | 3.0315 | - | - |
| 0.9360 | 31800 | 3.0297 | - | - |
| 0.9389 | 31900 | 3.0228 | - | - |
| 0.9419 | 32000 | 3.0246 | 3.0272 | - |
| 0.9448 | 32100 | 3.0215 | - | - |
| 0.9478 | 32200 | 3.0246 | - | - |
| 0.9507 | 32300 | 3.0333 | - | - |
| 0.9536 | 32400 | 3.0334 | - | - |
| 0.9566 | 32500 | 3.029 | 3.0271 | - |
| 0.9595 | 32600 | 3.0328 | - | - |
| 0.9625 | 32700 | 3.0284 | - | - |
| 0.9654 | 32800 | 3.0327 | - | - |
| 0.9684 | 32900 | 3.0228 | - | - |
| 0.9713 | 33000 | 3.0321 | 3.0267 | - |
| 0.9742 | 33100 | 3.0277 | - | - |
| 0.9772 | 33200 | 3.0309 | - | - |
| 0.9801 | 33300 | 3.0265 | - | - |
| 0.9831 | 33400 | 3.029 | - | - |
| 0.9860 | 33500 | 3.0315 | 3.0257 | - |
| 0.9890 | 33600 | 3.0233 | - | - |
| 0.9919 | 33700 | 3.0208 | - | - |
| 0.9948 | 33800 | 3.0296 | - | - |
| 0.9978 | 33900 | 3.0271 | - | - |
| 1.0007 | 34000 | 3.0258 | 3.0261 | - |
| 1.0037 | 34100 | 3.0233 | - | - |
| 1.0066 | 34200 | 3.0283 | - | - |
| 1.0096 | 34300 | 3.0277 | - | - |
| 1.0125 | 34400 | 3.0233 | - | - |
| 1.0155 | 34500 | 3.0296 | 3.0270 | - |
| 1.0184 | 34600 | 3.0321 | - | - |
| 1.0213 | 34700 | 3.0314 | - | - |
| 1.0243 | 34800 | 3.0458 | - | - |
| 1.0272 | 34900 | 3.0415 | - | - |
| 1.0302 | 35000 | 3.0271 | 3.0261 | - |
| 1.0331 | 35100 | 3.0252 | - | - |
| 1.0361 | 35200 | 3.0327 | - | - |
| 1.0390 | 35300 | 3.0302 | - | - |
| 1.0419 | 35400 | 3.0264 | - | - |
| 1.0449 | 35500 | 3.0314 | 3.0269 | - |
| 1.0478 | 35600 | 3.0252 | - | - |
| 1.0508 | 35700 | 3.0302 | - | - |
| 1.0537 | 35800 | 3.0339 | - | - |
| 1.0567 | 35900 | 3.0277 | - | - |
| 1.0596 | 36000 | 3.0314 | 3.0232 | - |
| 1.0625 | 36100 | 3.0339 | - | - |
| 1.0655 | 36200 | 3.0233 | - | - |
| 1.0684 | 36300 | 3.0264 | - | - |
| 1.0714 | 36400 | 3.0246 | - | - |
| 1.0743 | 36500 | 3.0252 | 3.0242 | - |
| 1.0773 | 36600 | 3.027 | - | - |
| 1.0802 | 36700 | 3.0202 | - | - |
| 1.0831 | 36800 | 3.0245 | - | - |
| 1.0861 | 36900 | 3.0239 | - | - |
| 1.0890 | 37000 | 3.022 | 3.0229 | - |
| 1.0920 | 37100 | 3.0164 | - | - |
| 1.0949 | 37200 | 3.0289 | - | - |
| 1.0979 | 37300 | 3.012 | - | - |
| 1.1008 | 37400 | 3.027 | - | - |
| 1.1038 | 37500 | 3.0283 | 3.0229 | - |
| 1.1067 | 37600 | 3.0289 | - | - |
| 1.1096 | 37700 | 3.0264 | - | - |
| 1.1126 | 37800 | 3.0295 | - | - |
| 1.1155 | 37900 | 3.0245 | - | - |
| 1.1185 | 38000 | 3.0301 | 3.0226 | - |
| 1.1214 | 38100 | 3.0276 | - | - |
| 1.1244 | 38200 | 3.0264 | - | - |
| 1.1273 | 38300 | 3.0264 | - | - |
| 1.1302 | 38400 | 3.022 | - | - |
| 1.1332 | 38500 | 3.0308 | 3.0243 | - |
| 1.1361 | 38600 | 3.022 | - | - |
| 1.1391 | 38700 | 3.027 | - | - |
| 1.1420 | 38800 | 3.0189 | - | - |
| 1.1450 | 38900 | 3.0282 | - | - |
| 1.1479 | 39000 | 3.0226 | 3.0228 | - |
| 1.1508 | 39100 | 3.0257 | - | - |
| 1.1538 | 39200 | 3.0201 | - | - |
| 1.1567 | 39300 | 3.0282 | - | - |
| 1.1597 | 39400 | 3.0395 | - | - |
| 1.1626 | 39500 | 3.042 | 3.0340 | - |
| 1.1656 | 39600 | 3.0432 | - | - |
| 1.1685 | 39700 | 3.0214 | - | - |
| 1.1714 | 39800 | 3.022 | - | - |
| 1.1744 | 39900 | 3.0245 | - | - |
| 1.1773 | 40000 | 3.032 | 3.0276 | - |
| 1.1803 | 40100 | 3.0389 | - | - |
| 1.1832 | 40200 | 3.0332 | - | - |
| 1.1862 | 40300 | 3.0689 | - | - |
| 1.1891 | 40400 | 3.0476 | - | - |
| 1.1921 | 40500 | 3.0626 | 3.0399 | - |
| 1.1950 | 40600 | 3.0357 | - | - |
| 1.1979 | 40700 | 3.0282 | - | - |
| 1.2009 | 40800 | 3.0276 | - | - |
| 1.2038 | 40900 | 3.032 | - | - |
| 1.2068 | 41000 | 3.0189 | 3.0256 | - |
| 1.2097 | 41100 | 3.0276 | - | - |
| 1.2127 | 41200 | 3.0276 | - | - |
| 1.2156 | 41300 | 3.0276 | - | - |
| 1.2185 | 41400 | 3.0301 | - | - |
| 1.2215 | 41500 | 3.0238 | 3.0262 | - |
| 1.2244 | 41600 | 3.0326 | - | - |
| 1.2274 | 41700 | 3.0295 | - | - |
| 1.2303 | 41800 | 3.0307 | - | - |
| 1.2333 | 41900 | 3.0351 | - | - |
| 1.2362 | 42000 | 3.0301 | 3.0242 | - |
| 1.2391 | 42100 | 3.0238 | - | - |
| 1.2421 | 42200 | 3.0232 | - | - |
| 1.2450 | 42300 | 3.0301 | - | - |
| 1.2480 | 42400 | 3.0201 | - | - |
| 1.2509 | 42500 | 3.0295 | 3.0242 | - |
| 1.2539 | 42600 | 3.0326 | - | - |
| 1.2568 | 42700 | 3.0232 | - | - |
| 1.2597 | 42800 | 3.0213 | - | - |
| 1.2627 | 42900 | 3.0263 | - | - |
| 1.2656 | 43000 | 3.0351 | 3.0236 | - |
| 1.2686 | 43100 | 3.0295 | - | - |
| 1.2715 | 43200 | 3.0232 | - | - |
| 1.2745 | 43300 | 3.0207 | - | - |
| 1.2774 | 43400 | 3.027 | - | - |
| 1.2804 | 43500 | 3.0276 | 3.0234 | - |
| 1.2833 | 43600 | 3.0257 | - | - |
| 1.2862 | 43700 | 3.0263 | - | - |
| 1.2892 | 43800 | 3.0163 | - | - |
| 1.2921 | 43900 | 3.0282 | - | - |
| 1.2951 | 44000 | 3.0276 | 3.0270 | - |
| 1.2980 | 44100 | 3.032 | - | - |
| 1.3010 | 44200 | 3.0326 | - | - |
| 1.3039 | 44300 | 3.0288 | - | - |
| 1.3068 | 44400 | 3.0263 | - | - |
| 1.3098 | 44500 | 3.0251 | 3.0231 | - |
| 1.3127 | 44600 | 3.0188 | - | - |
| 1.3157 | 44700 | 3.0213 | - | - |
| 1.3186 | 44800 | 3.0157 | - | - |
| 1.3216 | 44900 | 3.0238 | - | - |
| 1.3245 | 45000 | 3.0263 | 3.0214 | - |
| 1.3274 | 45100 | 3.0194 | - | - |
| 1.3304 | 45200 | 3.0301 | - | - |
| 1.3333 | 45300 | 3.0232 | - | - |
| 1.3363 | 45400 | 3.0163 | - | - |
| 1.3392 | 45500 | 3.0157 | 3.0214 | - |
| 1.3422 | 45600 | 3.0219 | - | - |
| 1.3451 | 45700 | 3.0169 | - | - |
| 1.3481 | 45800 | 3.0232 | - | - |
| 1.3510 | 45900 | 3.0344 | - | - |
| 1.3539 | 46000 | 3.0219 | 3.0209 | - |
| 1.3569 | 46100 | 3.0183 | - | - |
| 1.3598 | 46200 | 3.0207 | - | - |
| 1.3628 | 46300 | 3.0351 | - | - |
| 1.3657 | 46400 | 3.0244 | - | - |
| 1.3687 | 46500 | 3.0194 | 3.0208 | - |
| 1.3716 | 46600 | 3.0176 | - | - |
| 1.3745 | 46700 | 3.0244 | - | - |
| 1.3775 | 46800 | 3.0263 | - | - |
| 1.3804 | 46900 | 3.0151 | - | - |
| 1.3834 | 47000 | 3.0226 | 3.0208 | - |
| 1.3863 | 47100 | 3.0213 | - | - |
| 1.3893 | 47200 | 3.0307 | - | - |
| 1.3922 | 47300 | 3.0244 | - | - |
| 1.3951 | 47400 | 3.0238 | - | - |
| 1.3981 | 47500 | 3.0276 | 3.0207 | - |
| 1.4010 | 47600 | 3.0282 | - | - |
| 1.4040 | 47700 | 3.0201 | - | - |
| 1.4069 | 47800 | 3.0226 | - | - |
| 1.4099 | 47900 | 3.0263 | - | - |
| 1.4128 | 48000 | 3.0213 | 3.0208 | - |
| 1.4157 | 48100 | 3.0201 | - | - |
| 1.4187 | 48200 | 3.0207 | - | - |
| 1.4216 | 48300 | 3.0288 | - | - |
| 1.4246 | 48400 | 3.0182 | - | - |
| 1.4275 | 48500 | 3.0263 | 3.0200 | - |
| 1.4305 | 48600 | 3.0207 | - | - |
| 1.4334 | 48700 | 3.0332 | - | - |
| 1.4364 | 48800 | 3.0201 | - | - |
| 1.4393 | 48900 | 3.0182 | - | - |
| 1.4422 | 49000 | 3.0188 | 3.0200 | - |
| 1.4452 | 49100 | 3.0213 | - | - |
| 1.4481 | 49200 | 3.0144 | - | - |
| 1.4511 | 49300 | 3.0257 | - | - |
| 1.4540 | 49400 | 3.0201 | - | - |
| 1.4570 | 49500 | 3.0238 | 3.0191 | - |
| 1.4599 | 49600 | 3.0294 | - | - |
| 1.4628 | 49700 | 3.0226 | - | - |
| 1.4658 | 49800 | 3.0194 | - | - |
| 1.4687 | 49900 | 3.0169 | - | - |
| 1.4717 | 50000 | 3.0207 | 3.0189 | - |
| 1.4746 | 50100 | 3.0219 | - | - |
| 1.4776 | 50200 | 3.0194 | - | - |
| 1.4805 | 50300 | 3.0126 | - | - |
| 1.4834 | 50400 | 3.0194 | - | - |
| 1.4864 | 50500 | 3.0163 | 3.0208 | - |
| 1.4893 | 50600 | 3.0182 | - | - |
| 1.4923 | 50700 | 3.0169 | - | - |
| 1.4952 | 50800 | 3.0188 | - | - |
| 1.4982 | 50900 | 3.0219 | - | - |
| 1.5011 | 51000 | 3.0169 | 3.0200 | - |
| 1.5040 | 51100 | 3.0294 | - | - |
| 1.5070 | 51200 | 3.0207 | - | - |
| 1.5099 | 51300 | 3.02 | - | - |
| 1.5129 | 51400 | 3.0207 | - | - |
| 1.5158 | 51500 | 3.0175 | 3.0196 | - |
| 1.5188 | 51600 | 3.0225 | - | - |
| 1.5217 | 51700 | 3.0213 | - | - |
| 1.5247 | 51800 | 3.02 | - | - |
| 1.5276 | 51900 | 3.0232 | - | - |
| 1.5305 | 52000 | 3.0275 | 3.0188 | - |
| 1.5335 | 52100 | 3.0169 | - | - |
| 1.5364 | 52200 | 3.02 | - | - |
| 1.5394 | 52300 | 3.0232 | - | - |
| 1.5423 | 52400 | 3.0125 | - | - |
| 1.5453 | 52500 | 3.0163 | 3.0188 | - |
| 1.5482 | 52600 | 3.0163 | - | - |
| 1.5511 | 52700 | 3.0269 | - | - |
| 1.5541 | 52800 | 3.0194 | - | - |
| 1.5570 | 52900 | 3.0238 | - | - |
| 1.5600 | 53000 | 3.02 | 3.0183 | - |
| 1.5629 | 53100 | 3.0175 | - | - |
| 1.5659 | 53200 | 3.0157 | - | - |
| 1.5688 | 53300 | 3.0157 | - | - |
| 1.5717 | 53400 | 3.0232 | - | - |
| 1.5747 | 53500 | 3.0238 | 3.0182 | - |
| 1.5776 | 53600 | 3.0207 | - | - |
| 1.5806 | 53700 | 3.0182 | - | - |
| 1.5835 | 53800 | 3.0213 | - | - |
| 1.5865 | 53900 | 3.0213 | - | - |
| 1.5894 | 54000 | 3.0125 | 3.0181 | - |
| 1.5923 | 54100 | 3.0119 | - | - |
| 1.5953 | 54200 | 3.0194 | - | - |
| 1.5982 | 54300 | 3.0125 | - | - |
| 1.6012 | 54400 | 3.0257 | - | - |
| 1.6041 | 54500 | 3.02 | 3.0181 | - |
| 1.6071 | 54600 | 3.0232 | - | - |
| 1.6100 | 54700 | 3.025 | - | - |
| 1.6130 | 54800 | 3.0263 | - | - |
| 1.6159 | 54900 | 3.0144 | - | - |
| 1.6188 | 55000 | 3.0138 | 3.0177 | - |
| 1.6218 | 55100 | 3.0207 | - | - |
| 1.6247 | 55200 | 3.015 | - | - |
| 1.6277 | 55300 | 3.0175 | - | - |
| 1.6306 | 55400 | 3.0163 | - | - |
| 1.6336 | 55500 | 3.0157 | 3.0172 | - |
| 1.6365 | 55600 | 3.01 | - | - |
| 1.6394 | 55700 | 3.0132 | - | - |
| 1.6424 | 55800 | 3.0232 | - | - |
| 1.6453 | 55900 | 3.02 | - | - |
| 1.6483 | 56000 | 3.0163 | 3.0145 | - |
| 1.6512 | 56100 | 3.0132 | - | - |
| 1.6542 | 56200 | 3.0219 | - | - |
| 1.6571 | 56300 | 3.0188 | - | - |
| 1.6600 | 56400 | 3.015 | - | - |
| 1.6630 | 56500 | 3.0157 | 3.0146 | - |
| 1.6659 | 56600 | 3.0188 | - | - |
| 1.6689 | 56700 | 3.0225 | - | - |
| 1.6718 | 56800 | 3.0094 | - | - |
| 1.6748 | 56900 | 3.0163 | - | - |
| 1.6777 | 57000 | 3.0244 | 3.0158 | - |
| 1.6806 | 57100 | 3.0157 | - | - |
| 1.6836 | 57200 | 3.0157 | - | - |
| 1.6865 | 57300 | 3.015 | - | - |
| 1.6895 | 57400 | 3.0125 | - | - |
| 1.6924 | 57500 | 3.0169 | 3.0151 | - |
| 1.6954 | 57600 | 3.02 | - | - |
| 1.6983 | 57700 | 3.0138 | - | - |
| 1.7013 | 57800 | 3.0163 | - | - |
| 1.7042 | 57900 | 3.0169 | - | - |
| 1.7071 | 58000 | 3.0169 | 3.0153 | - |
| 1.7101 | 58100 | 3.0119 | - | - |
| 1.7130 | 58200 | 3.0132 | - | - |
| 1.7160 | 58300 | 3.0138 | - | - |
| 1.7189 | 58400 | 3.0225 | - | - |
| 1.7219 | 58500 | 3.02 | 3.0148 | - |
| 1.7248 | 58600 | 3.015 | - | - |
| 1.7277 | 58700 | 3.0188 | - | - |
| 1.7307 | 58800 | 3.015 | - | - |
| 1.7336 | 58900 | 3.015 | - | - |
| 1.7366 | 59000 | 3.0082 | 3.0148 | - |
| 1.7395 | 59100 | 3.0213 | - | - |
| 1.7425 | 59200 | 3.0094 | - | - |
| 1.7454 | 59300 | 3.0188 | - | - |
| 1.7483 | 59400 | 3.0138 | - | - |
| 1.7513 | 59500 | 3.0138 | 3.0148 | - |
| 1.7542 | 59600 | 3.0188 | - | - |
| 1.7572 | 59700 | 3.0107 | - | - |
| 1.7601 | 59800 | 3.0119 | - | - |
| 1.7631 | 59900 | 3.015 | - | - |
| 1.7660 | 60000 | 3.0194 | 3.0147 | - |
| 1.7689 | 60100 | 3.0144 | - | - |
| 1.7719 | 60200 | 3.0182 | - | - |
| 1.7748 | 60300 | 3.0213 | - | - |
| 1.7778 | 60400 | 3.0144 | - | - |
| 1.7807 | 60500 | 3.0157 | 3.0147 | - |
| 1.7837 | 60600 | 3.0132 | - | - |
| 1.7866 | 60700 | 3.0163 | - | - |
| 1.7896 | 60800 | 3.0182 | - | - |
| 1.7925 | 60900 | 3.015 | - | - |
| 1.7954 | 61000 | 3.0088 | 3.0148 | - |
| 1.7984 | 61100 | 3.015 | - | - |
| 1.8013 | 61200 | 3.0144 | - | - |
| 1.8043 | 61300 | 3.0113 | - | - |
| 1.8072 | 61400 | 3.0182 | - | - |
| 1.8102 | 61500 | 3.0194 | 3.0147 | - |
| 1.8131 | 61600 | 3.02 | - | - |
| 1.8160 | 61700 | 3.0125 | - | - |
| 1.8190 | 61800 | 3.015 | - | - |
| 1.8219 | 61900 | 3.0175 | - | - |
| 1.8249 | 62000 | 3.0119 | 3.0146 | - |
| 1.8278 | 62100 | 3.0169 | - | - |
| 1.8308 | 62200 | 3.0225 | - | - |
| 1.8337 | 62300 | 3.0207 | - | - |
| 1.8366 | 62400 | 3.0169 | - | - |
| 1.8396 | 62500 | 3.0125 | 3.0170 | - |
| 1.8425 | 62600 | 3.0188 | - | - |
| 1.8455 | 62700 | 3.0157 | - | - |
| 1.8484 | 62800 | 3.0182 | - | - |
| 1.8514 | 62900 | 3.01 | - | - |
| 1.8543 | 63000 | 3.0138 | 3.0148 | - |
| 1.8572 | 63100 | 3.0094 | - | - |
| 1.8602 | 63200 | 3.0157 | - | - |
| 1.8631 | 63300 | 3.02 | - | - |
| 1.8661 | 63400 | 3.0094 | - | - |
| 1.8690 | 63500 | 3.0182 | 3.0145 | - |
| 1.8720 | 63600 | 3.0157 | - | - |
| 1.8749 | 63700 | 3.0138 | - | - |
| 1.8779 | 63800 | 3.0125 | - | - |
| 1.8808 | 63900 | 3.015 | - | - |
| 1.8837 | 64000 | 3.0075 | 3.0144 | - |
| 1.8867 | 64100 | 3.0157 | - | - |
| 1.8896 | 64200 | 3.0088 | - | - |
| 1.8926 | 64300 | 3.0225 | - | - |
| 1.8955 | 64400 | 3.0175 | - | - |
| 1.8985 | 64500 | 3.0232 | 3.0179 | - |
| 1.9014 | 64600 | 3.0257 | - | - |
| 1.9043 | 64700 | 3.0175 | - | - |
| 1.9073 | 64800 | 3.0188 | - | - |
| 1.9102 | 64900 | 3.0125 | - | - |
| 1.9132 | 65000 | 3.0225 | 3.0170 | - |
| 1.9161 | 65100 | 3.02 | - | - |
| 1.9191 | 65200 | 3.0213 | - | - |
| 1.9220 | 65300 | 3.0113 | - | - |
| 1.9249 | 65400 | 3.0182 | - | - |
| 1.9279 | 65500 | 3.0232 | 3.0169 | - |
| 1.9308 | 65600 | 3.0225 | - | - |
| 1.9338 | 65700 | 3.0181 | - | - |
| 1.9367 | 65800 | 3.0181 | - | - |
| 1.9397 | 65900 | 3.0194 | - | - |
| 1.9426 | 66000 | 3.0175 | 3.0168 | - |
| 1.9455 | 66100 | 3.0181 | - | - |
| 1.9485 | 66200 | 3.0157 | - | - |
| 1.9514 | 66300 | 3.0169 | - | - |
| 1.9544 | 66400 | 3.0181 | - | - |
| 1.9573 | 66500 | 3.0138 | 3.0152 | - |
| 1.9603 | 66600 | 3.0175 | - | - |
| 1.9632 | 66700 | 3.0156 | - | - |
| 1.9662 | 66800 | 3.0106 | - | - |
| 1.9691 | 66900 | 3.01 | - | - |
| 1.9720 | 67000 | 3.0175 | 3.0141 | - |
| 1.9750 | 67100 | 3.0144 | - | - |
| 1.9779 | 67200 | 3.0131 | - | - |
| 1.9809 | 67300 | 3.0113 | - | - |
| 1.9838 | 67400 | 3.0113 | - | - |
| 1.9868 | 67500 | 3.0125 | 3.0140 | - |
| 1.9897 | 67600 | 3.0119 | - | - |
| 1.9926 | 67700 | 3.02 | - | - |
| 1.9956 | 67800 | 3.0125 | - | - |
| 1.9985 | 67900 | 3.01 | - | - |
| 2.0015 | 68000 | 3.0156 | 3.0139 | - |
| 2.0044 | 68100 | 3.0131 | - | - |
| 2.0074 | 68200 | 3.015 | - | - |
| 2.0103 | 68300 | 3.0169 | - | - |
| 2.0132 | 68400 | 3.0169 | - | - |
| 2.0162 | 68500 | 3.0119 | 3.0139 | - |
| 2.0191 | 68600 | 3.0138 | - | - |
| 2.0221 | 68700 | 3.0138 | - | - |
| 2.0250 | 68800 | 3.0163 | - | - |
| 2.0280 | 68900 | 3.0188 | - | - |
| 2.0309 | 69000 | 3.0188 | 3.0139 | - |
| 2.0338 | 69100 | 3.01 | - | - |
| 2.0368 | 69200 | 3.015 | - | - |
| 2.0397 | 69300 | 3.0175 | - | - |
| 2.0427 | 69400 | 3.0144 | - | - |
| 2.0456 | 69500 | 3.0188 | 3.0139 | - |
| 2.0486 | 69600 | 3.0119 | - | - |
| 2.0515 | 69700 | 3.0131 | - | - |
| 2.0545 | 69800 | 3.0131 | - | - |
| 2.0574 | 69900 | 3.0144 | - | - |
| 2.0603 | 70000 | 3.0144 | 3.0139 | - |
| 2.0633 | 70100 | 3.0163 | - | - |
| 2.0662 | 70200 | 3.0069 | - | - |
| 2.0692 | 70300 | 3.0213 | - | - |
| 2.0721 | 70400 | 3.0188 | - | - |
| 2.0751 | 70500 | 3.0131 | 3.0108 | - |
| 2.0780 | 70600 | 3.0131 | - | - |
| 2.0809 | 70700 | 3.0094 | - | - |
| 2.0839 | 70800 | 3.0131 | - | - |
| 2.0868 | 70900 | 3.0119 | - | - |
| 2.0898 | 71000 | 3.0106 | 3.0117 | - |
| 2.0927 | 71100 | 3.015 | - | - |
| 2.0957 | 71200 | 3.0106 | - | - |
| 2.0986 | 71300 | 3.0106 | - | - |
| 2.1015 | 71400 | 3.0113 | - | - |
| 2.1045 | 71500 | 3.01 | 3.0117 | - |
| 2.1074 | 71600 | 3.01 | - | - |
| 2.1104 | 71700 | 3.0138 | - | - |
| 2.1133 | 71800 | 3.0088 | - | - |
| 2.1163 | 71900 | 3.0106 | - | - |
| 2.1192 | 72000 | 3.0069 | 3.0111 | - |
| 2.1221 | 72100 | 3.0056 | - | - |
| 2.1251 | 72200 | 3.0156 | - | - |
| 2.1280 | 72300 | 3.0094 | - | - |
| 2.1310 | 72400 | 3.0081 | - | - |
| 2.1339 | 72500 | 3.0125 | 3.0112 | - |
| 2.1369 | 72600 | 3.0125 | - | - |
| 2.1398 | 72700 | 3.0144 | - | - |
| 2.1428 | 72800 | 3.0156 | - | - |
| 2.1457 | 72900 | 3.0094 | - | - |
| 2.1486 | 73000 | 3.0075 | 3.0112 | - |
| 2.1516 | 73100 | 3.0119 | - | - |
| 2.1545 | 73200 | 3.0088 | - | - |
| 2.1575 | 73300 | 3.0119 | - | - |
| 2.1604 | 73400 | 3.0131 | - | - |
| 2.1634 | 73500 | 3.0094 | 3.0110 | - |
| 2.1663 | 73600 | 3.0063 | - | - |
| 2.1692 | 73700 | 3.0138 | - | - |
| 2.1722 | 73800 | 3.0094 | - | - |
| 2.1751 | 73900 | 3.0144 | - | - |
| 2.1781 | 74000 | 3.0081 | 3.0109 | - |
| 2.1810 | 74100 | 3.0138 | - | - |
| 2.1840 | 74200 | 3.0144 | - | - |
| 2.1869 | 74300 | 3.0094 | - | - |
| 2.1898 | 74400 | 3.0106 | - | - |
| 2.1928 | 74500 | 3.01 | 3.0110 | - |
| 2.1957 | 74600 | 3.0088 | - | - |
| 2.1987 | 74700 | 3.0081 | - | - |
| 2.2016 | 74800 | 3.0094 | - | - |
| 2.2046 | 74900 | 3.01 | - | - |
| 2.2075 | 75000 | 3.0181 | 3.0108 | - |
| 2.2104 | 75100 | 3.0088 | - | - |
| 2.2134 | 75200 | 3.0144 | - | - |
| 2.2163 | 75300 | 3.0131 | - | - |
| 2.2193 | 75400 | 3.01 | - | - |
| 2.2222 | 75500 | 3.0125 | 3.0112 | - |
| 2.2252 | 75600 | 3.0131 | - | - |
| 2.2281 | 75700 | 3.0125 | - | - |
| 2.2311 | 75800 | 3.01 | - | - |
| 2.2340 | 75900 | 3.01 | - | - |
| 2.2369 | 76000 | 3.0175 | 3.0112 | - |
| 2.2399 | 76100 | 3.0094 | - | - |
| 2.2428 | 76200 | 3.015 | - | - |
| 2.2458 | 76300 | 3.0075 | - | - |
| 2.2487 | 76400 | 3.0125 | - | - |
| 2.2517 | 76500 | 3.0131 | 3.0109 | - |
| 2.2546 | 76600 | 3.0175 | - | - |
| 2.2575 | 76700 | 3.0063 | - | - |
| 2.2605 | 76800 | 3.0113 | - | - |
| 2.2634 | 76900 | 3.0106 | - | - |
| 2.2664 | 77000 | 3.0106 | 3.0109 | - |
| 2.2693 | 77100 | 3.0125 | - | - |
| 2.2723 | 77200 | 3.0163 | - | - |
| 2.2752 | 77300 | 3.0081 | - | - |
| 2.2781 | 77400 | 3.0131 | - | - |
| 2.2811 | 77500 | 3.0119 | 3.0107 | - |
| 2.2840 | 77600 | 3.015 | - | - |
| 2.2870 | 77700 | 3.0125 | - | - |
| 2.2899 | 77800 | 3.0094 | - | - |
| 2.2929 | 77900 | 3.01 | - | - |
| 2.2958 | 78000 | 3.0125 | 3.0107 | - |
| 2.2987 | 78100 | 3.0113 | - | - |
| 2.3017 | 78200 | 3.01 | - | - |
| 2.3046 | 78300 | 3.0119 | - | - |
| 2.3076 | 78400 | 3.0131 | - | - |
| 2.3105 | 78500 | 3.0106 | 3.0109 | - |
| 2.3135 | 78600 | 3.0063 | - | - |
| 2.3164 | 78700 | 3.0113 | - | - |
| 2.3194 | 78800 | 3.01 | - | - |
| 2.3223 | 78900 | 3.0131 | - | - |
| 2.3252 | 79000 | 3.0088 | 3.0118 | - |
| 2.3282 | 79100 | 3.0088 | - | - |
| 2.3311 | 79200 | 3.0106 | - | - |
| 2.3341 | 79300 | 3.0081 | - | - |
| 2.3370 | 79400 | 3.0144 | - | - |
| 2.3400 | 79500 | 3.0138 | 3.0107 | - |
| 2.3429 | 79600 | 3.01 | - | - |
| 2.3458 | 79700 | 3.01 | - | - |
| 2.3488 | 79800 | 3.0144 | - | - |
| 2.3517 | 79900 | 3.01 | - | - |
| 2.3547 | 80000 | 3.0125 | 3.0104 | - |
| 2.3576 | 80100 | 3.005 | - | - |
| 2.3606 | 80200 | 3.0106 | - | - |
| 2.3635 | 80300 | 3.0094 | - | - |
| 2.3664 | 80400 | 3.0131 | - | - |
| 2.3694 | 80500 | 3.0125 | 3.0104 | - |
| 2.3723 | 80600 | 3.0106 | - | - |
| 2.3753 | 80700 | 3.01 | - | - |
| 2.3782 | 80800 | 3.0119 | - | - |
| 2.3812 | 80900 | 3.0088 | - | - |
| 2.3841 | 81000 | 3.0113 | 3.0103 | - |
| 2.3870 | 81100 | 3.0094 | - | - |
| 2.3900 | 81200 | 3.0094 | - | - |
| 2.3929 | 81300 | 3.0119 | - | - |
| 2.3959 | 81400 | 3.0094 | - | - |
| 2.3988 | 81500 | 3.0088 | 3.0103 | - |
| 2.4018 | 81600 | 3.0106 | - | - |
| 2.4047 | 81700 | 3.0088 | - | - |
| 2.4077 | 81800 | 3.005 | - | - |
| 2.4106 | 81900 | 3.0113 | - | - |
| 2.4135 | 82000 | 3.0138 | 3.0103 | - |
| 2.4165 | 82100 | 3.0106 | - | - |
| 2.4194 | 82200 | 3.0094 | - | - |
| 2.4224 | 82300 | 3.0069 | - | - |
| 2.4253 | 82400 | 3.0106 | - | - |
| 2.4283 | 82500 | 3.0106 | 3.0104 | - |
| 2.4312 | 82600 | 3.0156 | - | - |
| 2.4341 | 82700 | 3.0138 | - | - |
| 2.4371 | 82800 | 3.0113 | - | - |
| 2.4400 | 82900 | 3.01 | - | - |
| 2.4430 | 83000 | 3.0138 | 3.0104 | - |
| 2.4459 | 83100 | 3.0194 | - | - |
| 2.4489 | 83200 | 3.0075 | - | - |
| 2.4518 | 83300 | 3.0088 | - | - |
| 2.4547 | 83400 | 3.0081 | - | - |
| 2.4577 | 83500 | 3.0138 | 3.0104 | - |
| 2.4606 | 83600 | 3.0081 | - | - |
| 2.4636 | 83700 | 3.0163 | - | - |
| 2.4665 | 83800 | 3.0113 | - | - |
| 2.4695 | 83900 | 3.0063 | - | - |
| 2.4724 | 84000 | 3.0144 | 3.0103 | - |
| 2.4753 | 84100 | 3.0088 | - | - |
| 2.4783 | 84200 | 3.0144 | - | - |
| 2.4812 | 84300 | 3.0131 | - | - |
| 2.4842 | 84400 | 3.0094 | - | - |
| 2.4871 | 84500 | 3.015 | 3.0103 | - |
| 2.4901 | 84600 | 3.0106 | - | - |
| 2.4930 | 84700 | 3.0119 | - | - |
| 2.4960 | 84800 | 3.0125 | - | - |
| 2.4989 | 84900 | 3.0125 | - | - |
| 2.5018 | 85000 | 3.015 | 3.0113 | - |
| 2.5048 | 85100 | 3.0156 | - | - |
| 2.5077 | 85200 | 3.0194 | - | - |
| 2.5107 | 85300 | 3.0119 | - | - |
| 2.5136 | 85400 | 3.0075 | - | - |
| 2.5166 | 85500 | 3.0156 | 3.0103 | - |
| 2.5195 | 85600 | 3.0131 | - | - |
| 2.5224 | 85700 | 3.0044 | - | - |
| 2.5254 | 85800 | 3.0075 | - | - |
| 2.5283 | 85900 | 3.0113 | - | - |
| 2.5313 | 86000 | 3.0144 | 3.0103 | - |
| 2.5342 | 86100 | 3.0144 | - | - |
| 2.5372 | 86200 | 3.0113 | - | - |
| 2.5401 | 86300 | 3.0163 | - | - |
| 2.5430 | 86400 | 3.0169 | - | - |
| 2.5460 | 86500 | 3.01 | 3.0101 | - |
| 2.5489 | 86600 | 3.01 | - | - |
| 2.5519 | 86700 | 3.0113 | - | - |
| 2.5548 | 86800 | 3.0138 | - | - |
| 2.5578 | 86900 | 3.0113 | - | - |
| 2.5607 | 87000 | 3.0113 | 3.0101 | - |
| 2.5636 | 87100 | 3.0081 | - | - |
| 2.5666 | 87200 | 3.0069 | - | - |
| 2.5695 | 87300 | 3.0069 | - | - |
| 2.5725 | 87400 | 3.0088 | - | - |
| 2.5754 | 87500 | 3.0094 | 3.0101 | - |
| 2.5784 | 87600 | 3.0088 | - | - |
| 2.5813 | 87700 | 3.0119 | - | - |
| 2.5843 | 87800 | 3.01 | - | - |
| 2.5872 | 87900 | 3.0119 | - | - |
| 2.5901 | 88000 | 3.0125 | 3.0101 | - |
| 2.5931 | 88100 | 3.0088 | - | - |
| 2.5960 | 88200 | 3.0138 | - | - |
| 2.5990 | 88300 | 3.01 | - | - |
| 2.6019 | 88400 | 3.0119 | - | - |
| 2.6049 | 88500 | 3.0119 | 3.0102 | - |
| 2.6078 | 88600 | 3.0063 | - | - |
| 2.6107 | 88700 | 3.01 | - | - |
| 2.6137 | 88800 | 3.0125 | - | - |
| 2.6166 | 88900 | 3.0175 | - | - |
| 2.6196 | 89000 | 3.0113 | 3.0118 | - |
| 2.6225 | 89100 | 3.02 | - | - |
| 2.6255 | 89200 | 3.0194 | - | - |
| 2.6284 | 89300 | 3.0088 | - | - |
| 2.6313 | 89400 | 3.0144 | - | - |
| 2.6343 | 89500 | 3.0125 | 3.0105 | - |
| 2.6372 | 89600 | 3.0144 | - | - |
| 2.6402 | 89700 | 3.0163 | - | - |
| 2.6431 | 89800 | 3.0106 | - | - |
| 2.6461 | 89900 | 3.0131 | - | - |
| 2.6490 | 90000 | 3.0119 | 3.0101 | - |
| 2.6519 | 90100 | 3.0175 | - | - |
| 2.6549 | 90200 | 3.0106 | - | - |
| 2.6578 | 90300 | 3.0138 | - | - |
| 2.6608 | 90400 | 3.0069 | - | - |
| 2.6637 | 90500 | 3.0138 | 3.0100 | - |
| 2.6667 | 90600 | 3.0044 | - | - |
| 2.6696 | 90700 | 3.0131 | - | - |
| 2.6726 | 90800 | 3.01 | - | - |
| 2.6755 | 90900 | 3.0094 | - | - |
| 2.6784 | 91000 | 3.0094 | 3.0100 | - |
| 2.6814 | 91100 | 3.0156 | - | - |
| 2.6843 | 91200 | 3.01 | - | - |
| 2.6873 | 91300 | 3.01 | - | - |
| 2.6902 | 91400 | 3.01 | - | - |
| 2.6932 | 91500 | 3.0075 | 3.0098 | - |
| 2.6961 | 91600 | 3.0125 | - | - |
| 2.6990 | 91700 | 3.01 | - | - |
| 2.7020 | 91800 | 3.0081 | - | - |
| 2.7049 | 91900 | 3.01 | - | - |
| 2.7079 | 92000 | 3.0169 | 3.0097 | - |
| 2.7108 | 92100 | 3.01 | - | - |
| 2.7138 | 92200 | 3.0125 | - | - |
| 2.7167 | 92300 | 3.0131 | - | - |
| 2.7196 | 92400 | 3.0138 | - | - |
| 2.7226 | 92500 | 3.0156 | 3.0099 | - |
| 2.7255 | 92600 | 3.0113 | - | - |
| 2.7285 | 92700 | 3.0106 | - | - |
| 2.7314 | 92800 | 3.0125 | - | - |
| 2.7344 | 92900 | 3.0038 | - | - |
| 2.7373 | 93000 | 3.0088 | 3.0100 | - |
| 2.7403 | 93100 | 3.0081 | - | - |
| 2.7432 | 93200 | 3.0119 | - | - |
| 2.7461 | 93300 | 3.0138 | - | - |
| 2.7491 | 93400 | 3.0131 | - | - |
| 2.7520 | 93500 | 3.0106 | 3.0100 | - |
| 2.7550 | 93600 | 3.0081 | - | - |
| 2.7579 | 93700 | 3.0056 | - | - |
| 2.7609 | 93800 | 3.0106 | - | - |
| 2.7638 | 93900 | 3.0119 | - | - |
| 2.7667 | 94000 | 3.0075 | 3.0099 | - |
| 2.7697 | 94100 | 3.0119 | - | - |
| 2.7726 | 94200 | 3.0075 | - | - |
| 2.7756 | 94300 | 3.0094 | - | - |
| 2.7785 | 94400 | 3.0119 | - | - |
| 2.7815 | 94500 | 3.01 | 3.0099 | - |
| 2.7844 | 94600 | 3.0106 | - | - |
| 2.7873 | 94700 | 3.0131 | - | - |
| 2.7903 | 94800 | 3.0094 | - | - |
| 2.7932 | 94900 | 3.0075 | - | - |
| 2.7962 | 95000 | 3.0119 | 3.0098 | - |
| 2.7991 | 95100 | 3.0094 | - | - |
| 2.8021 | 95200 | 3.0138 | - | - |
| 2.8050 | 95300 | 3.0094 | - | - |
| 2.8079 | 95400 | 3.0125 | - | - |
| 2.8109 | 95500 | 3.0081 | 3.0100 | - |
| 2.8138 | 95600 | 3.0081 | - | - |
| 2.8168 | 95700 | 3.0088 | - | - |
| 2.8197 | 95800 | 3.0113 | - | - |
| 2.8227 | 95900 | 3.0075 | - | - |
| 2.8256 | 96000 | 3.0138 | 3.0097 | - |
| 2.8286 | 96100 | 3.0106 | - | - |
| 2.8315 | 96200 | 3.01 | - | - |
| 2.8344 | 96300 | 3.0119 | - | - |
| 2.8374 | 96400 | 3.0144 | - | - |
| 2.8403 | 96500 | 3.0106 | 3.0099 | - |
| 2.8433 | 96600 | 3.0094 | - | - |
| 2.8462 | 96700 | 3.0131 | - | - |
| 2.8492 | 96800 | 3.0088 | - | - |
| 2.8521 | 96900 | 3.005 | - | - |
| 2.8550 | 97000 | 3.0156 | 3.0099 | - |
| 2.8580 | 97100 | 3.0094 | - | - |
| 2.8609 | 97200 | 3.0081 | - | - |
| 2.8639 | 97300 | 3.0113 | - | - |
| 2.8668 | 97400 | 3.0138 | - | - |
| 2.8698 | 97500 | 3.0119 | 3.0096 | - |
| 2.8727 | 97600 | 3.0125 | - | - |
| 2.8756 | 97700 | 3.0094 | - | - |
| 2.8786 | 97800 | 3.0119 | - | - |
| 2.8815 | 97900 | 3.0081 | - | - |
| 2.8845 | 98000 | 3.0106 | 3.0096 | - |
| 2.8874 | 98100 | 3.0081 | - | - |
| 2.8904 | 98200 | 3.0125 | - | - |
| 2.8933 | 98300 | 3.0075 | - | - |
| 2.8962 | 98400 | 3.0119 | - | - |
| 2.8992 | 98500 | 3.0106 | 3.0096 | - |
| 2.9021 | 98600 | 3.0081 | - | - |
| 2.9051 | 98700 | 3.0094 | - | - |
| 2.9080 | 98800 | 3.0081 | - | - |
| 2.9110 | 98900 | 3.0144 | - | - |
| 2.9139 | 99000 | 3.0094 | 3.0091 | - |
| 2.9169 | 99100 | 3.0094 | - | - |
| 2.9198 | 99200 | 3.0094 | - | - |
| 2.9227 | 99300 | 3.0106 | - | - |
| 2.9257 | 99400 | 3.01 | - | - |
| 2.9286 | 99500 | 3.0113 | 3.0091 | - |
| 2.9316 | 99600 | 3.0106 | - | - |
| 2.9345 | 99700 | 3.0106 | - | - |
| 2.9375 | 99800 | 3.0094 | - | - |
| 2.9404 | 99900 | 3.0081 | - | - |
| 2.9433 | 100000 | 3.01 | 3.0091 | - |
| 2.9463 | 100100 | 3.0119 | - | - |
| 2.9492 | 100200 | 3.0106 | - | - |
| 2.9522 | 100300 | 3.0113 | - | - |
| 2.9551 | 100400 | 3.0075 | - | - |
| 2.9581 | 100500 | 3.0094 | 3.0098 | - |
| 2.9610 | 100600 | 3.0119 | - | - |
| 2.9639 | 100700 | 3.0106 | - | - |
| 2.9669 | 100800 | 3.0088 | - | - |
| 2.9698 | 100900 | 3.015 | - | - |
| 2.9728 | 101000 | 3.0106 | 3.0096 | - |
| 2.9757 | 101100 | 3.0075 | - | - |
| 2.9787 | 101200 | 3.0188 | - | - |
| 2.9816 | 101300 | 3.0088 | - | - |
| 2.9845 | 101400 | 3.0081 | - | - |
| 2.9875 | 101500 | 3.0075 | 3.0097 | - |
| 2.9904 | 101600 | 3.0119 | - | - |
| 2.9934 | 101700 | 3.01 | - | - |
| 2.9963 | 101800 | 3.0075 | - | - |
| 2.9993 | 101900 | 3.0094 | - | - |
| 3.0022 | 102000 | 3.0119 | 3.0097 | - |
| 3.0052 | 102100 | 3.0113 | - | - |
| 3.0081 | 102200 | 3.0088 | - | - |
| 3.0110 | 102300 | 3.0106 | - | - |
| 3.0140 | 102400 | 3.0113 | - | - |
| 3.0169 | 102500 | 3.015 | 3.0097 | - |
| 3.0199 | 102600 | 3.0088 | - | - |
| 3.0228 | 102700 | 3.0088 | - | - |
| 3.0258 | 102800 | 3.0106 | - | - |
| 3.0287 | 102900 | 3.0113 | - | - |
| 3.0316 | 103000 | 3.01 | 3.0094 | - |
| 3.0346 | 103100 | 3.0113 | - | - |
| 3.0375 | 103200 | 3.0125 | - | - |
| 3.0405 | 103300 | 3.0056 | - | - |
| 3.0434 | 103400 | 3.01 | - | - |
| 3.0464 | 103500 | 3.01 | 3.0094 | - |
| 3.0493 | 103600 | 3.01 | - | - |
| 3.0522 | 103700 | 3.01 | - | - |
| 3.0552 | 103800 | 3.0075 | - | - |
| 3.0581 | 103900 | 3.0063 | - | - |
| 3.0611 | 104000 | 3.015 | 3.0096 | - |
| 3.0640 | 104100 | 3.0063 | - | - |
| 3.0670 | 104200 | 3.0119 | - | - |
| 3.0699 | 104300 | 3.0088 | - | - |
| 3.0728 | 104400 | 3.0113 | - | - |
| 3.0758 | 104500 | 3.01 | 3.0095 | - |
| 3.0787 | 104600 | 3.0081 | - | - |
| 3.0817 | 104700 | 3.0094 | - | - |
| 3.0846 | 104800 | 3.0075 | - | - |
| 3.0876 | 104900 | 3.0113 | - | - |
| 3.0905 | 105000 | 3.0131 | 3.0095 | - |
| 3.0935 | 105100 | 3.0131 | - | - |
| 3.0964 | 105200 | 3.0131 | - | - |
| 3.0993 | 105300 | 3.0075 | - | - |
| 3.1023 | 105400 | 3.0119 | - | - |
| 3.1052 | 105500 | 3.0094 | 3.0092 | - |
| 3.1082 | 105600 | 3.0069 | - | - |
| 3.1111 | 105700 | 3.0063 | - | - |
| 3.1141 | 105800 | 3.0094 | - | - |
| 3.1170 | 105900 | 3.01 | - | - |
| 3.1199 | 106000 | 3.0113 | 3.0097 | - |
| 3.1229 | 106100 | 3.0056 | - | - |
| 3.1258 | 106200 | 3.01 | - | - |
| 3.1288 | 106300 | 3.0081 | - | - |
| 3.1317 | 106400 | 3.0106 | - | - |
| 3.1347 | 106500 | 3.01 | 3.0096 | - |
| 3.1376 | 106600 | 3.0069 | - | - |
| 3.1405 | 106700 | 3.0119 | - | - |
| 3.1435 | 106800 | 3.0081 | - | - |
| 3.1464 | 106900 | 3.0075 | - | - |
| 3.1494 | 107000 | 3.0081 | 3.0097 | - |
| 3.1523 | 107100 | 3.0075 | - | - |
| 3.1553 | 107200 | 3.0081 | - | - |
| 3.1582 | 107300 | 3.0125 | - | - |
| 3.1611 | 107400 | 3.0094 | - | - |
| 3.1641 | 107500 | 3.0094 | 3.0092 | - |
| 3.1670 | 107600 | 3.0175 | - | - |
| 3.1700 | 107700 | 3.01 | - | - |
| 3.1729 | 107800 | 3.0113 | - | - |
| 3.1759 | 107900 | 3.0094 | - | - |
| 3.1788 | 108000 | 3.0125 | 3.0091 | - |
| 3.1818 | 108100 | 3.0069 | - | - |
| 3.1847 | 108200 | 3.0119 | - | - |
| 3.1876 | 108300 | 3.0144 | - | - |
| 3.1906 | 108400 | 3.0075 | - | - |
| 3.1935 | 108500 | 3.0094 | 3.0097 | - |
| 3.1965 | 108600 | 3.0106 | - | - |
| 3.1994 | 108700 | 3.0144 | - | - |
| 3.2024 | 108800 | 3.0075 | - | - |
| 3.2053 | 108900 | 3.0156 | - | - |
| 3.2082 | 109000 | 3.0044 | 3.0095 | - |
| 3.2112 | 109100 | 3.01 | - | - |
| 3.2141 | 109200 | 3.0106 | - | - |
| 3.2171 | 109300 | 3.0081 | - | - |
| 3.2200 | 109400 | 3.0069 | - | - |
| 3.2230 | 109500 | 3.01 | 3.0096 | - |
| 3.2259 | 109600 | 3.01 | - | - |
| 3.2288 | 109700 | 3.0125 | - | - |
| 3.2318 | 109800 | 3.0069 | - | - |
| 3.2347 | 109900 | 3.0081 | - | - |
| 3.2377 | 110000 | 3.0088 | 3.0097 | - |
| 3.2406 | 110100 | 3.0119 | - | - |
| 3.2436 | 110200 | 3.0131 | - | - |
| 3.2465 | 110300 | 3.0119 | - | - |
| 3.2494 | 110400 | 3.0094 | - | - |
| 3.2524 | 110500 | 3.0094 | 3.0096 | - |
| 3.2553 | 110600 | 3.0144 | - | - |
| 3.2583 | 110700 | 3.0069 | - | - |
| 3.2612 | 110800 | 3.0131 | - | - |
| 3.2642 | 110900 | 3.0081 | - | - |
| 3.2671 | 111000 | 3.01 | 3.0096 | - |
| 3.2701 | 111100 | 3.01 | - | - |
| 3.2730 | 111200 | 3.01 | - | - |
| 3.2759 | 111300 | 3.0125 | - | - |
| 3.2789 | 111400 | 3.0113 | - | - |
| 3.2818 | 111500 | 3.0088 | 3.0095 | - |
| 3.2848 | 111600 | 3.0131 | - | - |
| 3.2877 | 111700 | 3.0125 | - | - |
| 3.2907 | 111800 | 3.01 | - | - |
| 3.2936 | 111900 | 3.0113 | - | - |
| 3.2965 | 112000 | 3.0044 | 3.0095 | - |
| 3.2995 | 112100 | 3.0144 | - | - |
| 3.3024 | 112200 | 3.0081 | - | - |
| 3.3054 | 112300 | 3.0106 | - | - |
| 3.3083 | 112400 | 3.0094 | - | - |
| 3.3113 | 112500 | 3.005 | 3.0095 | - |
| 3.3142 | 112600 | 3.0131 | - | - |
| 3.3171 | 112700 | 3.0081 | - | - |
| 3.3201 | 112800 | 3.0094 | - | - |
| 3.3230 | 112900 | 3.0075 | - | - |
| 3.3260 | 113000 | 3.0113 | 3.0095 | - |
| 3.3289 | 113100 | 3.0081 | - | - |
| 3.3319 | 113200 | 3.0094 | - | - |
| 3.3348 | 113300 | 3.0081 | - | - |
| 3.3377 | 113400 | 3.0106 | - | - |
| 3.3407 | 113500 | 3.0169 | 3.0095 | - |
| 3.3436 | 113600 | 3.0056 | - | - |
| 3.3466 | 113700 | 3.0081 | - | - |
| 3.3495 | 113800 | 3.0069 | - | - |
| 3.3525 | 113900 | 3.0094 | - | - |
| 3.3554 | 114000 | 3.0031 | 3.0095 | - |
| 3.3584 | 114100 | 3.0069 | - | - |
| 3.3613 | 114200 | 3.0075 | - | - |
| 3.3642 | 114300 | 3.015 | - | - |
| 3.3672 | 114400 | 3.0081 | - | - |
| 3.3701 | 114500 | 3.0094 | 3.0095 | - |
| 3.3731 | 114600 | 3.0056 | - | - |
| 3.3760 | 114700 | 3.0081 | - | - |
| 3.3790 | 114800 | 3.0119 | - | - |
| 3.3819 | 114900 | 3.0075 | - | - |
| 3.3848 | 115000 | 3.0063 | 3.0098 | - |
| 3.3878 | 115100 | 3.0144 | - | - |
| 3.3907 | 115200 | 3.0138 | - | - |
| 3.3937 | 115300 | 3.0081 | - | - |
| 3.3966 | 115400 | 3.0113 | - | - |
| 3.3996 | 115500 | 3.0138 | 3.0098 | - |
| 3.4025 | 115600 | 3.0081 | - | - |
| 3.4054 | 115700 | 3.0106 | - | - |
| 3.4084 | 115800 | 3.0088 | - | - |
| 3.4113 | 115900 | 3.0106 | - | - |
| 3.4143 | 116000 | 3.0156 | 3.0095 | - |
| 3.4172 | 116100 | 3.0119 | - | - |
| 3.4202 | 116200 | 3.01 | - | - |
| 3.4231 | 116300 | 3.0144 | - | - |
| 3.4260 | 116400 | 3.0131 | - | - |
| 3.4290 | 116500 | 3.0131 | 3.0097 | - |
| 3.4319 | 116600 | 3.0088 | - | - |
| 3.4349 | 116700 | 3.0113 | - | - |
| 3.4378 | 116800 | 3.0044 | - | - |
| 3.4408 | 116900 | 3.01 | - | - |
| 3.4437 | 117000 | 3.0069 | 3.0094 | - |
| 3.4467 | 117100 | 3.0081 | - | - |
| 3.4496 | 117200 | 3.0125 | - | - |
| 3.4525 | 117300 | 3.0069 | - | - |
| 3.4555 | 117400 | 3.0063 | - | - |
| 3.4584 | 117500 | 3.0044 | 3.0095 | - |
| 3.4614 | 117600 | 3.0119 | - | - |
| 3.4643 | 117700 | 3.0081 | - | - |
| 3.4673 | 117800 | 3.0081 | - | - |
| 3.4702 | 117900 | 3.0106 | - | - |
| 3.4731 | 118000 | 3.0125 | 3.0095 | - |
| 3.4761 | 118100 | 3.0138 | - | - |
| 3.4790 | 118200 | 3.0106 | - | - |
| 3.4820 | 118300 | 3.0144 | - | - |
| 3.4849 | 118400 | 3.0081 | - | - |
| 3.4879 | 118500 | 3.01 | 3.0095 | - |
| 3.4908 | 118600 | 3.0075 | - | - |
| 3.4937 | 118700 | 3.0056 | - | - |
| 3.4967 | 118800 | 3.0069 | - | - |
| 3.4996 | 118900 | 3.0094 | - | - |
| 3.5026 | 119000 | 3.0119 | 3.0095 | - |
| 3.5055 | 119100 | 3.0038 | - | - |
| 3.5085 | 119200 | 3.025 | - | - |
| 3.5114 | 119300 | 3.0081 | - | - |
| 3.5143 | 119400 | 3.0119 | - | - |
| 3.5173 | 119500 | 3.005 | 3.0095 | - |
| 3.5202 | 119600 | 3.01 | - | - |
| 3.5232 | 119700 | 3.0025 | - | - |
| 3.5261 | 119800 | 3.0088 | - | - |
| 3.5291 | 119900 | 3.0106 | - | - |
| 3.5320 | 120000 | 3.0138 | 3.0095 | - |
| 3.5350 | 120100 | 3.0056 | - | - |
| 3.5379 | 120200 | 3.0088 | - | - |
| 3.5408 | 120300 | 3.0125 | - | - |
| 3.5438 | 120400 | 3.0125 | - | - |
| 3.5467 | 120500 | 3.0056 | 3.0095 | - |
| 3.5497 | 120600 | 3.0131 | - | - |
| 3.5526 | 120700 | 3.0119 | - | - |
| 3.5556 | 120800 | 3.0094 | - | - |
| 3.5585 | 120900 | 3.0106 | - | - |
| 3.5614 | 121000 | 3.0113 | 3.0095 | - |
| 3.5644 | 121100 | 3.0106 | - | - |
| 3.5673 | 121200 | 3.0156 | - | - |
| 3.5703 | 121300 | 3.0069 | - | - |
| 3.5732 | 121400 | 3.0125 | - | - |
| 3.5762 | 121500 | 3.0069 | 3.0095 | - |
| 3.5791 | 121600 | 3.01 | - | - |
| 3.5820 | 121700 | 3.0119 | - | - |
| 3.5850 | 121800 | 3.0088 | - | - |
| 3.5879 | 121900 | 3.0119 | - | - |
| 3.5909 | 122000 | 3.0069 | 3.0095 | - |
| 3.5938 | 122100 | 3.0069 | - | - |
| 3.5968 | 122200 | 3.0138 | - | - |
| 3.5997 | 122300 | 3.01 | - | - |
| 3.6026 | 122400 | 3.0106 | - | - |
| 3.6056 | 122500 | 3.0113 | 3.0095 | - |
| 3.6085 | 122600 | 3.01 | - | - |
| 3.6115 | 122700 | 3.005 | - | - |
| 3.6144 | 122800 | 3.0069 | - | - |
| 3.6174 | 122900 | 3.0094 | - | - |
| 3.6203 | 123000 | 3.0119 | 3.0095 | - |
| 3.6233 | 123100 | 3.0056 | - | - |
| 3.6262 | 123200 | 3.0075 | - | - |
| 3.6291 | 123300 | 3.0106 | - | - |
| 3.6321 | 123400 | 3.005 | - | - |
| 3.6350 | 123500 | 3.0081 | 3.0095 | - |
| 3.6380 | 123600 | 3.02 | - | - |
| 3.6409 | 123700 | 3.0094 | - | - |
| 3.6439 | 123800 | 3.0119 | - | - |
| 3.6468 | 123900 | 3.0106 | - | - |
| 3.6497 | 124000 | 3.0125 | 3.0095 | - |
| 3.6527 | 124100 | 3.0125 | - | - |
| 3.6556 | 124200 | 3.0188 | - | - |
| 3.6586 | 124300 | 3.01 | - | - |
| 3.6615 | 124400 | 3.0088 | - | - |
| 3.6645 | 124500 | 3.0169 | 3.0095 | - |
| 3.6674 | 124600 | 3.0113 | - | - |
| 3.6703 | 124700 | 3.0063 | - | - |
| 3.6733 | 124800 | 3.0094 | - | - |
| 3.6762 | 124900 | 3.0038 | - | - |
| 3.6792 | 125000 | 3.0106 | 3.0091 | - |
| 3.6821 | 125100 | 3.005 | - | - |
| 3.6851 | 125200 | 3.0081 | - | - |
| 3.6880 | 125300 | 3.0075 | - | - |
| 3.6909 | 125400 | 3.0131 | - | - |
| 3.6939 | 125500 | 3.0075 | 3.0091 | - |
| 3.6968 | 125600 | 3.0131 | - | - |
| 3.6998 | 125700 | 3.01 | - | - |
| 3.7027 | 125800 | 3.0075 | - | - |
| 3.7057 | 125900 | 3.0113 | - | - |
| 3.7086 | 126000 | 3.0094 | 3.0091 | - |
| 3.7116 | 126100 | 3.0081 | - | - |
| 3.7145 | 126200 | 3.0119 | - | - |
| 3.7174 | 126300 | 3.0088 | - | - |
| 3.7204 | 126400 | 3.0063 | - | - |
| 3.7233 | 126500 | 3.0081 | 3.0091 | - |
| 3.7263 | 126600 | 3.0125 | - | - |
| 3.7292 | 126700 | 3.0125 | - | - |
| 3.7322 | 126800 | 3.0131 | - | - |
| 3.7351 | 126900 | 3.0106 | - | - |
| 3.7380 | 127000 | 3.0088 | 3.0091 | - |
| 3.7410 | 127100 | 3.0113 | - | - |
| 3.7439 | 127200 | 3.0125 | - | - |
| 3.7469 | 127300 | 3.0094 | - | - |
| 3.7498 | 127400 | 3.0069 | - | - |
| 3.7528 | 127500 | 3.0088 | 3.0091 | - |
| 3.7557 | 127600 | 3.0163 | - | - |
| 3.7586 | 127700 | 3.0094 | - | - |
| 3.7616 | 127800 | 3.0069 | - | - |
| 3.7645 | 127900 | 3.0063 | - | - |
| 3.7675 | 128000 | 3.0094 | 3.0091 | - |
| 3.7704 | 128100 | 3.01 | - | - |
| 3.7734 | 128200 | 3.015 | - | - |
| 3.7763 | 128300 | 3.0163 | - | - |
| 3.7792 | 128400 | 3.0106 | - | - |
| 3.7822 | 128500 | 3.0113 | 3.0091 | - |
| 3.7851 | 128600 | 3.0069 | - | - |
| 3.7881 | 128700 | 3.0113 | - | - |
| 3.7910 | 128800 | 3.0063 | - | - |
| 3.7940 | 128900 | 3.0088 | - | - |
| 3.7969 | 129000 | 3.0019 | 3.0091 | - |
| 3.7999 | 129100 | 3.0094 | - | - |
| 3.8028 | 129200 | 3.0038 | - | - |
| 3.8057 | 129300 | 3.0044 | - | - |
| 3.8087 | 129400 | 3.0088 | - | - |
| 3.8116 | 129500 | 3.0113 | 3.0091 | - |
| 3.8146 | 129600 | 3.0094 | - | - |
| 3.8175 | 129700 | 3.0088 | - | - |
| 3.8205 | 129800 | 3.0113 | - | - |
| 3.8234 | 129900 | 3.0094 | - | - |
| 3.8263 | 130000 | 3.0069 | 3.0091 | - |
| 3.8293 | 130100 | 3.0113 | - | - |
| 3.8322 | 130200 | 3.0081 | - | - |
| 3.8352 | 130300 | 3.0125 | - | - |
| 3.8381 | 130400 | 3.0156 | - | - |
| 3.8411 | 130500 | 3.0069 | 3.0091 | - |
| 3.8440 | 130600 | 3.0131 | - | - |
| 3.8469 | 130700 | 3.0131 | - | - |
| 3.8499 | 130800 | 3.005 | - | - |
| 3.8528 | 130900 | 3.0106 | - | - |
| 3.8558 | 131000 | 3.0119 | 3.0089 | - |
| 3.8587 | 131100 | 3.0081 | - | - |
| 3.8617 | 131200 | 3.0088 | - | - |
| 3.8646 | 131300 | 3.0075 | - | - |
| 3.8675 | 131400 | 3.0056 | - | - |
</details>
### Framework Versions
- Python: 3.8.10
- Sentence Transformers: 3.1.1
- Transformers: 4.45.1
- PyTorch: 2.4.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
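For context, the triplet objective cited above pushes each anchor embedding to be closer to its positive than to its negative by at least a margin. A minimal sketch of that hinge (the margin value here is a hypothetical default, not confirmed by this card):

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin: float = 5.0):
    """Hinge on Euclidean distances: anchor should be closer to positive than to negative by `margin`."""
    d_pos = F.pairwise_distance(anchor, positive, p=2)
    d_neg = F.pairwise_distance(anchor, negative, p=2)
    return F.relu(d_pos - d_neg + margin).mean()
```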
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
kholiavko/reception-llama-3.1-8b-test-8-3-gguf
|
kholiavko
| 2024-10-18T09:20:34Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T09:14:33Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** kholiavko
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
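Since the weights are published in GGUF format, one way to try them locally is via llama-cpp-python; the snippet below is only a sketch, and the GGUF filename is a hypothetical placeholder (check the repository's file list for the actual name).

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# "model-q4_k_m.gguf" is a hypothetical filename, not taken from this repository.
gguf_path = hf_hub_download(
    repo_id="kholiavko/reception-llama-3.1-8b-test-8-3-gguf",
    filename="model-q4_k_m.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Hello! How can I help you today?", max_tokens=64)
print(out["choices"][0]["text"])
```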
|
KONIexp/10_90_inst_exp_Llama-3_1-8B-Instruct_20241018
|
KONIexp
| 2024-10-18T09:17:45Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T09:13:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PhanPhong/ner-phobert
|
PhanPhong
| 2024-10-18T09:12:26Z | 126 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-18T09:11:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KONIexp/0_100_inst_exp_Llama-3_1-8B-Instruct_20241018
|
KONIexp
| 2024-10-18T09:11:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T09:06:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf
|
RichardErkhov
| 2024-10-18T09:10:51Z | 38 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-18T05:30:22Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-13b-german-assistant-v4 - GGUF
- Model creator: https://huggingface.co/flozi00/
- Original model: https://huggingface.co/flozi00/Llama-2-13b-german-assistant-v4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-13b-german-assistant-v4.Q2_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q2_K.gguf) | Q2_K | 4.55GB |
| [Llama-2-13b-german-assistant-v4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.IQ3_XS.gguf) | IQ3_XS | 5.03GB |
| [Llama-2-13b-german-assistant-v4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.IQ3_S.gguf) | IQ3_S | 5.3GB |
| [Llama-2-13b-german-assistant-v4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q3_K_S.gguf) | Q3_K_S | 5.3GB |
| [Llama-2-13b-german-assistant-v4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.IQ3_M.gguf) | IQ3_M | 5.61GB |
| [Llama-2-13b-german-assistant-v4.Q3_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q3_K.gguf) | Q3_K | 5.94GB |
| [Llama-2-13b-german-assistant-v4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q3_K_M.gguf) | Q3_K_M | 5.94GB |
| [Llama-2-13b-german-assistant-v4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q3_K_L.gguf) | Q3_K_L | 6.49GB |
| [Llama-2-13b-german-assistant-v4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.IQ4_XS.gguf) | IQ4_XS | 6.57GB |
| [Llama-2-13b-german-assistant-v4.Q4_0.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q4_0.gguf) | Q4_0 | 6.9GB |
| [Llama-2-13b-german-assistant-v4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.IQ4_NL.gguf) | IQ4_NL | 6.94GB |
| [Llama-2-13b-german-assistant-v4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q4_K_S.gguf) | Q4_K_S | 6.95GB |
| [Llama-2-13b-german-assistant-v4.Q4_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q4_K.gguf) | Q4_K | 7.36GB |
| [Llama-2-13b-german-assistant-v4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q4_K_M.gguf) | Q4_K_M | 7.36GB |
| [Llama-2-13b-german-assistant-v4.Q4_1.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q4_1.gguf) | Q4_1 | 7.65GB |
| [Llama-2-13b-german-assistant-v4.Q5_0.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q5_0.gguf) | Q5_0 | 8.4GB |
| [Llama-2-13b-german-assistant-v4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q5_K_S.gguf) | Q5_K_S | 8.4GB |
| [Llama-2-13b-german-assistant-v4.Q5_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q5_K.gguf) | Q5_K | 8.64GB |
| [Llama-2-13b-german-assistant-v4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q5_K_M.gguf) | Q5_K_M | 8.64GB |
| [Llama-2-13b-german-assistant-v4.Q5_1.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q5_1.gguf) | Q5_1 | 9.15GB |
| [Llama-2-13b-german-assistant-v4.Q6_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q6_K.gguf) | Q6_K | 9.99GB |
| [Llama-2-13b-german-assistant-v4.Q8_0.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf/blob/main/Llama-2-13b-german-assistant-v4.Q8_0.gguf) | Q8_0 | 12.94GB |
Original model description:
---
datasets:
- flozi00/conversations
language:
- en
- de
---
## This project is sponsored by [  ](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)
# Model Card
This model is a finetuned version for German instructions and conversations in the style of Alpaca, using the "### Assistant:" and "### User:" turn markers.
The dataset used is deduplicated and cleaned, with no code inside. The focus is on instruction following and conversational tasks.
The model architecture is based on Llama version 2 with 13B parameters, trained on hardware powered by 100% renewable energy.
This work is contributed by private research of [flozi00](https://huggingface.co/flozi00)
Join discussions about German LLM research and plan larger training runs together: https://join.slack.com/t/slack-dtc7771/shared_invite/zt-219keplqu-hLwjm0xcFAOX7enERfBz0Q
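As a rough illustration (not from the original card), the sketch below shows one way the quantized files listed above could be run locally with `llama-cpp-python`, using the `### User:` / `### Assistant:` prompt style described in the card. The chosen quant file, generation settings, and library usage are assumptions for illustration only.
```python
# Minimal sketch: download one of the GGUF files listed above and run it with
# llama-cpp-python, using the "### User:" / "### Assistant:" prompt style.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed choice of quant; any file from the table above should work the same way.
model_path = hf_hub_download(
    repo_id="RichardErkhov/flozi00_-_Llama-2-13b-german-assistant-v4-gguf",
    filename="Llama-2-13b-german-assistant-v4.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

# German example prompt ("Briefly explain what machine translation is.")
prompt = "### User: Erkläre kurz, was maschinelle Übersetzung ist.\n### Assistant:"
output = llm(prompt, max_tokens=128, stop=["### User:"])
print(output["choices"][0]["text"])
```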
|
lhstest/all_inst
|
lhstest
| 2024-10-18T09:05:26Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-18T07:15:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
itsx-tom/estates-testing-classifier
|
itsx-tom
| 2024-10-18T09:04:54Z | 16 | 0 | null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-10-18T09:04:48Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: itsx-tom/estates-testing-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6777777671813965
---
# itsx-tom/estates-testing-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
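For reference, a HuggingPics model like this one can typically be loaded with the standard 🤗 Transformers image-classification pipeline; the snippet below is a minimal sketch, and the example image path is a placeholder rather than part of this card.
```python
# Minimal sketch: classify a single image with the 🤗 Transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="itsx-tom/estates-testing-classifier",
)

# "room.jpg" is a placeholder; pass any local image file or image URL.
predictions = classifier("room.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```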
## Example Images
#### bathroom

#### empty room

#### exterier

#### interier

#### kitchen

#### livingroom

#### partially equipped room

#### real photo

#### visualization

|
Zymed/my-fine-tuned-dm-model
|
Zymed
| 2024-10-18T09:03:06Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T08:51:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf
|
RichardErkhov
| 2024-10-18T08:51:36Z | 74 | 0 | null |
[
"gguf",
"arxiv:2410.03115",
"arxiv:2401.08417",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T04:30:12Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
X-ALMA-13B-Pretrain - GGUF
- Model creator: https://huggingface.co/haoranxu/
- Original model: https://huggingface.co/haoranxu/X-ALMA-13B-Pretrain/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [X-ALMA-13B-Pretrain.Q2_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q2_K.gguf) | Q2_K | 4.52GB |
| [X-ALMA-13B-Pretrain.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [X-ALMA-13B-Pretrain.IQ3_S.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [X-ALMA-13B-Pretrain.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [X-ALMA-13B-Pretrain.IQ3_M.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [X-ALMA-13B-Pretrain.Q3_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q3_K.gguf) | Q3_K | 5.9GB |
| [X-ALMA-13B-Pretrain.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [X-ALMA-13B-Pretrain.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q3_K_L.gguf) | Q3_K_L | 1.37GB |
| [X-ALMA-13B-Pretrain.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.IQ4_XS.gguf) | IQ4_XS | 0.5GB |
| [X-ALMA-13B-Pretrain.Q4_0.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q4_0.gguf) | Q4_0 | 6.86GB |
| [X-ALMA-13B-Pretrain.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [X-ALMA-13B-Pretrain.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [X-ALMA-13B-Pretrain.Q4_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q4_K.gguf) | Q4_K | 7.33GB |
| [X-ALMA-13B-Pretrain.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [X-ALMA-13B-Pretrain.Q4_1.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q4_1.gguf) | Q4_1 | 7.61GB |
| [X-ALMA-13B-Pretrain.Q5_0.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q5_0.gguf) | Q5_0 | 8.36GB |
| [X-ALMA-13B-Pretrain.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [X-ALMA-13B-Pretrain.Q5_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q5_K.gguf) | Q5_K | 8.6GB |
| [X-ALMA-13B-Pretrain.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [X-ALMA-13B-Pretrain.Q5_1.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q5_1.gguf) | Q5_1 | 9.1GB |
| [X-ALMA-13B-Pretrain.Q6_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q6_K.gguf) | Q6_K | 9.95GB |
| [X-ALMA-13B-Pretrain.Q8_0.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_X-ALMA-13B-Pretrain-gguf/blob/main/X-ALMA-13B-Pretrain.Q8_0.gguf) | Q8_0 | 12.88GB |
Original model description:
---
license: mit
datasets:
- oscar-corpus/OSCAR-2301
- allenai/nllb
- Helsinki-NLP/opus-100
language:
- en
- da
- nl
- de
- is
- 'no'
- sc
- af
- ca
- ro
- gl
- it
- pt
- es
- bg
- mk
- sr
- uk
- ru
- id
- ms
- th
- vi
- mg
- fr
- hu
- el
- cs
- pl
- lt
- lv
- ka
- zh
- ja
- ko
- fi
- et
- gu
- hi
- mr
- ne
- ur
- az
- kk
- ky
- tr
- uz
- ar
- he
- fa
base_model:
- haoranxu/ALMA-13B-Pretrain
---
[X-ALMA](https://arxiv.org/pdf/2410.03115) builds upon [ALMA-R](https://arxiv.org/pdf/2401.08417) by expanding support from 6 to 50 languages. It utilizes a plug-and-play architecture with language-specific modules, complemented by a carefully designed training recipe. This release includes the **X-ALMA pre-trained base model**.
```
@misc{xu2024xalmaplugplay,
title={X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale},
author={Haoran Xu and Kenton Murray and Philipp Koehn and Hieu Hoang and Akiko Eriguchi and Huda Khayrallah},
year={2024},
eprint={2410.03115},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.03115},
}
```
X-ALMA-13B-Pretrain is pre-trained on 50 languages: en,da,nl,de,is,no,sv,af,ca,ro,gl,it,pt,es,bg,mk,sr,uk,ru,id,ms,th,vi,mg,fr,hu,el,cs,pl,lt,lv,ka,zh,ja,ko,fi,et,gu,hi,mr,ne,ur,az,kk,ky,tr,uz,ar,he,fa.
All X-ALMA checkpoints are released on Hugging Face:
| Models | Model Link | Description |
|:-------------:|:---------------:|:---------------:|
| X-ALMA | [haoranxu/X-ALMA](https://huggingface.co/haoranxu/X-ALMA) | X-ALMA model with all its modules |
| X-ALMA-13B-Pretrain | [haoranxu/X-ALMA-13B-Pretrain](https://huggingface.co/haoranxu/X-ALMA-13B-Pretrain) | X-ALMA 13B multilingual pre-trained base model |
| X-ALMA-Group1 | [haoranxu/X-ALMA-13B-Group1](https://huggingface.co/haoranxu/X-ALMA-13B-Group1) | X-ALMA group1 specific module and the merged model |
| X-ALMA-Group2 | [haoranxu/X-ALMA-13B-Group2](https://huggingface.co/haoranxu/X-ALMA-13B-Group2) | X-ALMA group2 specific module and the merged model |
| X-ALMA-Group3 | [haoranxu/X-ALMA-13B-Group3](https://huggingface.co/haoranxu/X-ALMA-13B-Group3) | X-ALMA group3 specific module and the merged model |
| X-ALMA-Group4 | [haoranxu/X-ALMA-13B-Group4](https://huggingface.co/haoranxu/X-ALMA-13B-Group4) | X-ALMA group4 specific module and the merged model |
| X-ALMA-Group5 | [haoranxu/X-ALMA-13B-Group5](https://huggingface.co/haoranxu/X-ALMA-13B-Group5) | X-ALMA group5 specific module and the merged model |
| X-ALMA-Group6 | [haoranxu/X-ALMA-13B-Group6](https://huggingface.co/haoranxu/X-ALMA-13B-Group6) | X-ALMA group6 specific module and the merged model |
| X-ALMA-Group7 | [haoranxu/X-ALMA-13B-Group7](https://huggingface.co/haoranxu/X-ALMA-13B-Group7) | X-ALMA group7 specific module and the merged model |
| X-ALMA-Group8 | [haoranxu/X-ALMA-13B-Group8](https://huggingface.co/haoranxu/X-ALMA-13B-Group8) | X-ALMA group8 specific module and the merged model |
## A quick start:
There are three ways to load X-ALMA for translation. Below is an example of translating "我爱机器翻译。" ("I love machine translation.") into English (X-ALMA should also be able to do multilingual open-ended QA).
**The first way**: loading the merged model where the language-specific module has been merged into the base model **(Recommended)**:
```
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
from peft import PeftModel
GROUP2LANG = {
1: ["da", "nl", "de", "is", "no", "sv", "af"],
2: ["ca", "ro", "gl", "it", "pt", "es"],
3: ["bg", "mk", "sr", "uk", "ru"],
4: ["id", "ms", "th", "vi", "mg", "fr"],
5: ["hu", "el", "cs", "pl", "lt", "lv"],
6: ["ka", "zh", "ja", "ko", "fi", "et"],
7: ["gu", "hi", "mr", "ne", "ur"],
8: ["az", "kk", "ky", "tr", "uz", "ar", "he", "fa"],
}
LANG2GROUP = {lang: str(group) for group, langs in GROUP2LANG.items() for lang in langs}
group_id = LANG2GROUP["zh"]
model = AutoModelForCausalLM.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')
# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
# X-ALMA needs chat template but ALMA and ALMA-R don't need it.
chat_style_prompt = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(chat_style_prompt, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
**The second way**: loading the base model and language-specific module **(Recommended)**:
```
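# Assumes `torch`, `PeftModel`, and `group_id` from the first example above; prompt building and generation are unchanged.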
model = AutoModelForCausalLM.from_pretrained("haoranxu/X-ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, f"haoranxu/X-ALMA-13B-Group{group_id}")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')
```
**The third way**: loading the base model with all language-specific modules like MoE: (Require large GPU memory)
```
from modeling_xalma import XALMAForCausalLM
model = XALMAForCausalLM.from_pretrained("haoranxu/X-ALMA", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/X-ALMA", padding_side='left')
# Add `lang="zh"`: specify the language to instruct the model on which group to use for the third loading method during generation.
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9, lang="zh")
```
|
ai4bharat/hercule-ur-lora
|
ai4bharat
| 2024-10-18T08:51:09Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ur",
"arxiv:2410.13394",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-19T10:38:34Z |
---
library_name: transformers
license: mit
language:
- ur
metrics:
- pearsonr
- spearmanr
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
# Model Card for Hercule
Hercule is a cross-lingual evaluation model introduced as part of the CIA Suite to assess multilingual Large Language Models (LLMs). It addresses the challenge of evaluating multilingual LLMs by using English reference responses to score multilingual outputs.
Fine-tuned on the INTEL dataset, Hercule demonstrates better alignment with human judgments compared to zero-shot evaluations by proprietary models like GPT-4, on the RECON test set. It excels particularly in low-resource scenarios and supports zero-shot evaluations on unseen languages. The model employs reference-based evaluation, providing feedback and scores on a 1-5 scale, and highlights the effectiveness of lightweight fine-tuning methods (like LoRA) for efficient multilingual evaluation. All FFT models and LoRA weights are available [here](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
# Model Details
## Model Description
- **Model type:** Evaluator Language model
- **Language(s) (NLP):** Urdu
- **Related Models:** [Hercule Models](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2410.13394)
- [GitHub Repo](https://github.com/AI4Bharat/CIA)
Hercule is fine-tuned on [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using INTEL training data and evaluated on the RECON test set. Models for other languages are available in the [CIA Suite](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
## Prompt Format
We’ve developed wrapper functions and classes to make it easy to work with Hercule. Check them out on our [github repository](https://github.com/AI4Bharat/CIA) – we highly recommend using them!
If you only need to use the model for your specific use case, please follow the prompt format provided below.
### Reference Guided Direct Assessment
The Hercule model expects four input components: an evaluation instruction (multilingual), a response to evaluate (multilingual), a scoring rubric (English), and a reference answer (English). Use the prompt format provided below, ensuring that you include the instruction, response, reference answer, evaluation criteria, and a detailed score rubric for each score from 1 to 5.
After running inference with HERCULE, the output will include feedback and a score, separated by the phrase ```[RESULT]```.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{instruction}
###Response to evaluate:
{response}
###Reference Answer (Score 5):
{reference_answer}
###Score Rubrics:
[{criteria}]
Score 1: {score1_rubric}
Score 2: {score2_rubric}
Score 3: {score3_rubric}
Score 4: {score4_rubric}
Score 5: {score5_rubric}
###Feedback:
```
We use the same evaluation prompt as used in [Prometheus 2](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0).
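As a rough illustration (not part of the original card), the sketch below shows one way the template above could be filled in and the `[RESULT]` separator parsed from the generated text; the helper names, template variable, and example fields are placeholders.
```python
# Minimal sketch: fill the evaluation template shown above and split the model's
# output into feedback text and an integer score at the "[RESULT]" separator.
# `hercule_template` and the field values are placeholders, not defined in this card.

def build_prompt(template: str, fields: dict) -> str:
    # Substitutes {instruction}, {response}, {reference_answer}, {criteria}
    # and {score1_rubric}..{score5_rubric} into the template string.
    return template.format(**fields)

def parse_output(generated_text: str) -> tuple[str, int]:
    feedback, _, score_part = generated_text.partition("[RESULT]")
    return feedback.strip(), int(score_part.strip().split()[0])

# Usage (placeholders):
# prompt = build_prompt(hercule_template, {"instruction": "...", "response": "...", ...})
# feedback, score = parse_output(model_output)
```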
## Links for Reference
- **Repository**: https://github.com/AI4Bharat/CIA
- **Paper**: https://arxiv.org/abs/2410.13394
- **Point of Contact**: [email protected], [email protected]
## License
Intel training data is created from [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) which is subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@article{doddapaneni2024crosslingual,
title = {Cross-Lingual Auto Evaluation for Assessing Multilingual LLMs},
author = {Sumanth Doddapaneni and Mohammed Safi Ur Rahman Khan and Dilip Venkatesh and Raj Dabre and Anoop Kunchukuttan and Mitesh M. Khapra},
year = {2024},
journal = {arXiv preprint arXiv: 2410.13394}
}
```
|
ai4bharat/hercule-te-lora
|
ai4bharat
| 2024-10-18T08:51:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"te",
"arxiv:2410.13394",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-19T10:31:50Z |
---
library_name: transformers
license: mit
language:
- te
metrics:
- pearsonr
- spearmanr
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
# Model Card for Hercule
Hercule is a cross-lingual evaluation model introduced as part of the CIA Suite to assess multilingual Large Language Models (LLMs). It addresses the challenge of evaluating multilingual LLMs by using English reference responses to score multilingual outputs.
Fine-tuned on the INTEL dataset, Hercule demonstrates better alignment with human judgments compared to zero-shot evaluations by proprietary models like GPT-4, on the RECON test set. It excels particularly in low-resource scenarios and supports zero-shot evaluations on unseen languages. The model employs reference-based evaluation, providing feedback and scores on a 1-5 scale, and highlights the effectiveness of lightweight fine-tuning methods (like LoRA) for efficient multilingual evaluation. All FFT models and LoRA weights are available [here](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
# Model Details
## Model Description
- **Model type:** Evaluator Language model
- **Language(s) (NLP):** Telugu
- **Related Models:** [Hercule Models](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2410.13394)
- [GitHub Repo](https://github.com/AI4Bharat/CIA)
Hercule is fine-tuned on [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using INTEL training data and evaluated on the RECON test set. Models for other languages are available in the [CIA Suite](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
## Prompt Format
We’ve developed wrapper functions and classes to make it easy to work with Hercule. Check them out on our [github repository](https://github.com/AI4Bharat/CIA) – we highly recommend using them!
If you only need to use the model for your specific use case, please follow the prompt format provided below.
### Reference Guided Direct Assessment
The Hercule model expects four input components: an evaluation instruction (multilingual), a response to evaluate (multilingual), a scoring rubric (English), and a reference answer (English). Use the prompt format provided below, ensuring that you include the instruction, response, reference answer, evaluation criteria, and a detailed score rubric for each score from 1 to 5.
After running inference with HERCULE, the output will include feedback and a score, separated by the phrase ```[RESULT]```.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{instruction}
###Response to evaluate:
{response}
###Reference Answer (Score 5):
{reference_answer}
###Score Rubrics:
[{criteria}]
Score 1: {score1_rubric}
Score 2: {score2_rubric}
Score 3: {score3_rubric}
Score 4: {score4_rubric}
Score 5: {score5_rubric}
###Feedback:
```
We use the same evaluation prompt as used in [Prometheus 2](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0).
## Links for Reference
- **Repository**: https://github.com/AI4Bharat/CIA
- **Paper**: https://arxiv.org/abs/2410.13394
- **Point of Contact**: [email protected], [email protected]
## License
Intel training data is created from [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) which is subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@article{doddapaneni2024crosslingual,
title = {Cross-Lingual Auto Evaluation for Assessing Multilingual LLMs},
author = {Sumanth Doddapaneni and Mohammed Safi Ur Rahman Khan and Dilip Venkatesh and Raj Dabre and Anoop Kunchukuttan and Mitesh M. Khapra},
year = {2024},
journal = {arXiv preprint arXiv: 2410.13394}
}
```
|
ai4bharat/hercule-bn-lora
|
ai4bharat
| 2024-10-18T08:50:54Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"bn",
"arxiv:2410.13394",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-19T12:48:16Z |
---
library_name: transformers
license: mit
language:
- bn
metrics:
- pearsonr
- spearmanr
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
# Model Card for Hercule
Hercule is a cross-lingual evaluation model introduced as part of the CIA Suite to assess multilingual Large Language Models (LLMs). It addresses the challenge of evaluating multilingual LLMs by using English reference responses to score multilingual outputs.
Fine-tuned on the INTEL dataset, Hercule demonstrates better alignment with human judgments compared to zero-shot evaluations by proprietary models like GPT-4, on the RECON test set. It excels particularly in low-resource scenarios and supports zero-shot evaluations on unseen languages. The model employs reference-based evaluation, providing feedback and scores on a 1-5 scale, and highlights the effectiveness of lightweight fine-tuning methods (like LoRA) for efficient multilingual evaluation. All FFT models and LoRA weights are available [here](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
# Model Details
## Model Description
- **Model type:** Evaluator Language model
- **Language(s) (NLP):** Bengali
- **Related Models:** [Hercule Models](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2410.13394)
- [GitHub Repo](https://github.com/AI4Bharat/CIA)
Hercule is fine-tuned on [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using INTEL training data and evaluated on the RECON test set. Models for other languages are available in the [CIA Suite](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
## Prompt Format
We’ve developed wrapper functions and classes to make it easy to work with Hercule. Check them out on our [github repository](https://github.com/AI4Bharat/CIA) – we highly recommend using them!
If you only need to use the model for your specific use case, please follow the prompt format provided below.
### Reference Guided Direct Assessment
The Hercule model expects four input components: an evaluation instruction (multilingual), a response to evaluate (multilingual), a scoring rubric (English), and a reference answer (English). Use the prompt format provided below, ensuring that you include the instruction, response, reference answer, evaluation criteria, and a detailed score rubric for each score from 1 to 5.
After running inference with HERCULE, the output will include feedback and a score, separated by the phrase ```[RESULT]```.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{instruction}
###Response to evaluate:
{response}
###Reference Answer (Score 5):
{reference_answer}
###Score Rubrics:
[{criteria}]
Score 1: {score1_rubric}
Score 2: {score2_rubric}
Score 3: {score3_rubric}
Score 4: {score4_rubric}
Score 5: {score5_rubric}
###Feedback:
```
We use the same evaluation prompt as used in [Prometheus 2](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0).
## Links for Reference
- **Repository**: https://github.com/AI4Bharat/CIA
- **Paper**: https://arxiv.org/abs/2410.13394
- **Point of Contact**: [email protected], [email protected]
## License
Intel training data is created from [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) which is subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@article{doddapaneni2024crosslingual,
title = {Cross-Lingual Auto Evaluation for Assessing Multilingual LLMs},
author = {Sumanth Doddapaneni and Mohammed Safi Ur Rahman Khan and Dilip Venkatesh and Raj Dabre and Anoop Kunchukuttan and Mitesh M. Khapra},
year = {2024},
journal = {arXiv preprint arXiv: 2410.13394}
}
```
|
RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf
|
RichardErkhov
| 2024-10-18T08:50:15Z | 8 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T08:22:10Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-1.5B-Instruct-databricks-dolly-15k - GGUF
- Model creator: https://huggingface.co/marcov/
- Original model: https://huggingface.co/marcov/Qwen2.5-1.5B-Instruct-databricks-dolly-15k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q2_K.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q3_K.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_0.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_K.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_1.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_0.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_K.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_1.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q6_K.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q8_0.gguf](https://huggingface.co/RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf/blob/main/Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q8_0.gguf) | Q8_0 | 1.53GB |
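As an illustrative sketch (not part of the original card), one way to run one of the quantized files listed above is via `huggingface_hub` and `llama-cpp-python`; the chosen quant, context size, and prompt are arbitrary.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quantized files from the table above (Q4_K_M picked arbitrarily).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/marcov_-_Qwen2.5-1.5B-Instruct-databricks-dolly-15k-gguf",
    filename="Qwen2.5-1.5B-Instruct-databricks-dolly-15k.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Name three uses for a paper clip."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```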
Original model description:
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-databricks-dolly-15k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-databricks-dolly-15k
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="marcov/Qwen2.5-1.5B-Instruct-databricks-dolly-15k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.1
- Pytorch: 2.4.1
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ai4bharat/hercule-ur
|
ai4bharat
| 2024-10-18T08:50:01Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ur",
"arxiv:2410.13394",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-18T09:56:38Z |
---
library_name: transformers
license: mit
language:
- ur
metrics:
- pearsonr
- spearmanr
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
# Model Card for Hercule
Hercule is a cross-lingual evaluation model introduced as part of the CIA Suite to assess multilingual Large Language Models (LLMs). It addresses the challenge of evaluating multilingual LLMs by using English reference responses to score multilingual outputs.
Fine-tuned on the INTEL dataset, Hercule demonstrates better alignment with human judgments on the RECON test set than zero-shot evaluations by proprietary models like GPT-4. It excels particularly in low-resource scenarios and supports zero-shot evaluations on unseen languages. The model employs reference-based evaluation, providing feedback and scores on a 1-5 scale, and highlights the effectiveness of lightweight fine-tuning methods (like LoRA) for efficient multilingual evaluation. All FFT models and LoRA weights are available [here](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
# Model Details
## Model Description
- **Model type:** Evaluator Language model
- **Language(s) (NLP):** Urdu
- **Related Models:** [Hercule Models](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2410.13394)
- [GitHub Repo](https://github.com/AI4Bharat/CIA)
Hercule is fine-tuned on [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using the INTEL training data and evaluated on the RECON test set. Models for other languages are available in the [CIA Suite](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
## Prompt Format
We’ve developed wrapper functions and classes to make it easy to work with Hercule. Check them out on our [github repository](https://github.com/AI4Bharat/CIA) – we highly recommend using them!
If you only need to use the model for your specific use case, please follow the prompt format provided below.
### Reference Guided Direct Assessment
The Hercule model expects four input components: an evaluation instruction (multilingual), a response to evaluate (multilingual), a scoring rubric (English), and a reference answer (English). Use the prompt format provided below, ensuring that you include the instruction, response, reference answer, evaluation criteria, and a detailed score rubric for each score from 1 to 5.
After running inference with Hercule, the output will include feedback and a score, separated by the delimiter ```[RESULT]```.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{instruction}
###Response to evaluate:
{response}
###Reference Answer (Score 5):
{reference_answer}
###Score Rubrics:
[{criteria}]
Score 1: {score1_rubric}
Score 2: {score2_rubric}
Score 3: {score3_rubric}
Score 4: {score4_rubric}
Score 5: {score5_rubric}
###Feedback:
```
We use the same evaluation prompt as used in [Prometheus 2](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0).
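As an illustrative, untested sketch (assuming the filled template above is passed to the model as a plain text prompt, with arbitrary generation settings; the official wrappers on GitHub handle prompt construction and should be preferred), inference with this checkpoint might look as follows:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai4bharat/hercule-ur"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# `prompt` should be the filled ###-delimited template from the section above
# (instruction and response in Urdu, rubric and reference answer in English).
prompt = "###Task Description: ..."  # placeholder

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# Expected form: "Feedback: ... [RESULT] <score between 1 and 5>"
feedback, _, score = completion.rpartition("[RESULT]")
print(feedback.strip(), score.strip())
```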
## Links for Reference
- **Repository**: https://github.com/AI4Bharat/CIA
- **Paper**: https://arxiv.org/abs/2410.13394
- **Point of Contact**: [email protected], [email protected]
## License
The INTEL training data is created from [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection), which is subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@article{doddapaneni2024crosslingual,
title = {Cross-Lingual Auto Evaluation for Assessing Multilingual LLMs},
author = {Sumanth Doddapaneni and Mohammed Safi Ur Rahman Khan and Dilip Venkatesh and Raj Dabre and Anoop Kunchukuttan and Mitesh M. Khapra},
year = {2024},
journal = {arXiv preprint arXiv: 2410.13394}
}
```
|
djuna/TEST-L3.1-8B-linear
|
djuna
| 2024-10-18T08:49:13Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2",
"base_model:merge:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2",
"base_model:HiroseKoichi/Llama-3-8B-Stroganoff-4.0",
"base_model:merge:HiroseKoichi/Llama-3-8B-Stroganoff-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T08:41:40Z |
---
base_model:
- HiroseKoichi/Llama-3-8B-Stroganoff-4.0
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method using [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) as a base.
### Models Merged
The following models were included in the merge:
* [HiroseKoichi/Llama-3-8B-Stroganoff-4.0](https://huggingface.co/HiroseKoichi/Llama-3-8B-Stroganoff-4.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: linear
base_model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
models:
- model: HiroseKoichi/Llama-3-8B-Stroganoff-4.0
parameters:
weight:
- filter: v_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: o_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: up_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: gate_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: down_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- value: 0
- model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
parameters:
weight:
- filter: v_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: o_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: up_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: gate_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: down_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- value: 1
```
|
BroAlanTaps/GPT2-large-4-58000steps
|
BroAlanTaps
| 2024-10-18T08:45:30Z | 136 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T08:43:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BroAlanTaps/Llama3-instruct-4-58000steps
|
BroAlanTaps
| 2024-10-18T08:43:31Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T08:41:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
missingstuffedbun/finetuning-sentiment-model-3000-samples
|
missingstuffedbun
| 2024-10-18T08:40:56Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-18T08:35:12Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3543
- Accuracy: 0.8667
- F1: 0.8718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Aguf/Aba_GLM
|
Aguf
| 2024-10-18T08:34:14Z | 7 | 1 | null |
[
"safetensors",
"chatglm",
"custom_code",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2024-10-17T15:08:08Z |
---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mirari/segmentationCompleto
|
mirari
| 2024-10-18T08:30:31Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"spa",
"dataset:pyannote/segmentation",
"endpoints_compatible",
"region:us"
] | null | 2024-10-18T08:22:17Z |
---
library_name: transformers
language:
- spa
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
datasets:
- pyannote/segmentation
model-index:
- name: segmentation-3.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segmentation-3.0
This model is a fine-tuned version of [](https://huggingface.co/) on the pyannote/segmentation dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6638
- Der: 0.2932
- False Alarm: 0.2540
- Missed Detection: 0.0387
- Confusion: 0.0004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.3145 | 1.0 | 282 | 0.5751 | 0.2944 | 0.2487 | 0.0454 | 0.0003 |
| 0.3087 | 2.0 | 564 | 0.5957 | 0.2912 | 0.2462 | 0.0440 | 0.0010 |
| 0.2905 | 3.0 | 846 | 0.6614 | 0.2970 | 0.2627 | 0.0333 | 0.0010 |
| 0.2733 | 4.0 | 1128 | 0.6626 | 0.2940 | 0.2558 | 0.0378 | 0.0004 |
| 0.2672 | 5.0 | 1410 | 0.6638 | 0.2932 | 0.2540 | 0.0387 | 0.0004 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1
- Datasets 3.0.1
- Tokenizers 0.20.0
|
naver-ai/deit_large_patch16_LS
|
naver-ai
| 2024-10-18T08:22:12Z | 8 | 0 | null |
[
"pytorch",
"image-classification",
"dataset:ILSVRC/imagenet-1k",
"arxiv:2403.13298",
"license:bsd-3-clause",
"region:us"
] |
image-classification
| 2024-10-16T01:39:19Z |
---
license: bsd-3-clause
datasets:
- ILSVRC/imagenet-1k
pipeline_tag: image-classification
---
# Model Card
<!-- Provide a quick summary of what the model is/does. -->
ImageNet-1k DeiT-iii pre-trained model for baseline performance
## Rotary Position Embedding for Vision Transformer [ECCV 2024]
- **Repository:** https://github.com/naver-ai/rope-vit
- **Paper:** https://arxiv.org/abs/2403.13298
## Citation
```
@inproceedings{heo2024ropevit,
title={Rotary Position Embedding for Vision Transformer},
author={Heo, Byeongho and Park, Song and Han, Dongyoon and Yun, Sangdoo},
year={2024},
booktitle={European Conference on Computer Vision (ECCV)},
}
```
|
RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf
|
RichardErkhov
| 2024-10-18T08:18:35Z | 11 | 0 | null |
[
"gguf",
"arxiv:2005.01643",
"arxiv:2309.11235",
"arxiv:2006.09092",
"arxiv:2402.13228",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T04:27:32Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Bielik-11B-v2.1-Instruct - GGUF
- Model creator: https://huggingface.co/speakleash/
- Original model: https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Bielik-11B-v2.1-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q2_K.gguf) | Q2_K | 3.88GB |
| [Bielik-11B-v2.1-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.IQ3_XS.gguf) | IQ3_XS | 4.31GB |
| [Bielik-11B-v2.1-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.IQ3_S.gguf) | IQ3_S | 4.55GB |
| [Bielik-11B-v2.1-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q3_K_S.gguf) | Q3_K_S | 4.52GB |
| [Bielik-11B-v2.1-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.IQ3_M.gguf) | IQ3_M | 4.69GB |
| [Bielik-11B-v2.1-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q3_K.gguf) | Q3_K | 5.03GB |
| [Bielik-11B-v2.1-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q3_K_M.gguf) | Q3_K_M | 5.03GB |
| [Bielik-11B-v2.1-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q3_K_L.gguf) | Q3_K_L | 5.48GB |
| [Bielik-11B-v2.1-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.IQ4_XS.gguf) | IQ4_XS | 5.65GB |
| [Bielik-11B-v2.1-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q4_0.gguf) | Q4_0 | 0.65GB |
| [Bielik-11B-v2.1-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.IQ4_NL.gguf) | IQ4_NL | 5.95GB |
| [Bielik-11B-v2.1-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q4_K_S.gguf) | Q4_K_S | 5.93GB |
| [Bielik-11B-v2.1-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q4_K.gguf) | Q4_K | 6.26GB |
| [Bielik-11B-v2.1-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q4_K_M.gguf) | Q4_K_M | 6.26GB |
| [Bielik-11B-v2.1-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q4_1.gguf) | Q4_1 | 6.53GB |
| [Bielik-11B-v2.1-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q5_0.gguf) | Q5_0 | 7.17GB |
| [Bielik-11B-v2.1-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q5_K_S.gguf) | Q5_K_S | 7.17GB |
| [Bielik-11B-v2.1-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q5_K.gguf) | Q5_K | 7.36GB |
| [Bielik-11B-v2.1-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q5_K_M.gguf) | Q5_K_M | 7.36GB |
| [Bielik-11B-v2.1-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q5_1.gguf) | Q5_1 | 7.81GB |
| [Bielik-11B-v2.1-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q6_K.gguf) | Q6_K | 8.53GB |
| [Bielik-11B-v2.1-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-11B-v2.1-Instruct-gguf/blob/main/Bielik-11B-v2.1-Instruct.Q8_0.gguf) | Q8_0 | 11.05GB |
Original model description:
---
license: apache-2.0
base_model: speakleash/Bielik-11B-v2
language:
- pl
library_name: transformers
tags:
- finetuned
inference:
parameters:
temperature: 0.2
widget:
- messages:
- role: user
content: Co przedstawia polskie godło?
extra_gated_description: If you want to learn more about how you can use the model, please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>.
---
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct/raw/main/speakleash_cyfronet.png">
</p>
# Bielik-11B-v2.1-Instruct
Bielik-11B-v2.1-Instruct is a generative text model featuring 11 billion parameters.
It is an instruct fine-tuned version of the [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2).
The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH.
Developed and trained on Polish text corpora, which have been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure,
specifically within the PLGrid environment, and more precisely, the HPC centers: ACK Cyfronet AGH.
The creation and training of Bielik-11B-v2.1-Instruct was propelled by the support of computational grant number PLG/2024/016951, conducted on the Athena and Helios supercomputers,
enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes.
As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.
🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/
<span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.
## Model
The [SpeakLeash](https://speakleash.org/) team is working on their own set of instructions in Polish, which is continuously being expanded and refined by annotators. A portion of these instructions, which had been manually verified and corrected, has been utilized for training purposes. Moreover, due to the limited availability of high-quality instructions in Polish, synthetic instructions were generated with [Mixtral 8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) and used in training. The dataset used for training comprised over 20 million instructions, consisting of more than 10 billion tokens. The instructions varied in quality, leading to a deterioration in the model’s performance. To counteract this while still allowing ourselves to utilize the aforementioned datasets, several improvements were introduced:
* Weighted tokens level loss - a strategy inspired by [offline reinforcement learning](https://arxiv.org/abs/2005.01643) and [C-RLFT](https://arxiv.org/abs/2309.11235)
* Adaptive learning rate inspired by the study on [Learning Rates as a Function of Batch Size](https://arxiv.org/abs/2006.09092)
* Masked prompt tokens
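To make the loss-weighting and prompt-masking points more concrete, below is a toy sketch of a weighted, prompt-masked token-level loss as we read the description above; it is our own illustration, not the actual ALLaMo training code.
```python
import torch
import torch.nn.functional as F

def weighted_token_loss(logits, labels, token_weights, prompt_mask):
    """Toy illustration of a weighted token-level loss with masked prompt tokens.
    logits: (B, T, V), labels: (B, T), token_weights: (B, T), prompt_mask: (B, T) bool."""
    per_token = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")  # (B, T)
    weights = token_weights * (~prompt_mask)  # prompt tokens contribute nothing to the loss
    return (per_token * weights).sum() / weights.sum().clamp(min=1.0)
```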
To align the model with user preferences we tested many different techniques: DPO, PPO, KTO, SimPO. Finally, the [DPO-Positive](https://arxiv.org/abs/2402.13228) method was employed, utilizing both generated and manually corrected examples, which were scored by a metamodel. A dataset comprising over 60,000 examples of varying lengths was used to address different aspects of response style. It was filtered and evaluated by the reward model to select instructions with the right level of difference between the chosen and rejected responses. A novelty introduced in DPO-P was the inclusion of multi-turn conversations.
Bielik-11B-v2.1-Instruct has been trained with the use of an original open source framework called [ALLaMo](https://github.com/chrisociepa/allamo) implemented by [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/). This framework allows users to train language models with architecture similar to LLaMA and Mistral in fast and efficient way.
### Model description:
* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)
* **Model ref:** speakleash:a05d7fe0995e191985a863b48a39259b
### Quantized models:
We know that some people want to explore smaller models or don't have the resources to run a full model. Therefore, we have prepared quantized versions of the Bielik-11B-v2.1-Instruct model in separate repositories:
- [GGUF - Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct-GGUF)
- [GPTQ - 4bit](https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct-GPTQ)
- [FP8](https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct-FP8) (vLLM, SGLang - Ada Lovelace, Hopper optimized)
- [GGUF - experimental - IQ imatrix IQ1_M, IQ2_XXS, IQ3_XXS, IQ4_XS and calibrated Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct-GGUF-IQ-Imatrix)
Please note that quantized models may offer lower quality of generated answers compared to the full-sized variants.
### Chat template
Bielik-11B-v2.1-Instruct uses [ChatML](https://github.com/cognitivecomputations/OpenChatML) as the prompt format.
E.g.
```
prompt = "<s><|im_start|> user\nJakie mamy pory roku?<|im_end|> \n<|im_start|> assistant\n"
completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|> \n"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model_name = "speakleash/Bielik-11B-v2.1-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
messages = [
{"role": "system", "content": "Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim."},
{"role": "user", "content": "Jakie mamy pory roku w Polsce?"},
{"role": "assistant", "content": "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."},
{"role": "user", "content": "Która jest najcieplejsza?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = input_ids.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
The fully formatted input conversation produced by `apply_chat_template` in the previous example:
```
<s><|im_start|> system
Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim.<|im_end|>
<|im_start|> user
Jakie mamy pory roku w Polsce?<|im_end|>
<|im_start|> assistant
W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|>
<|im_start|> user
Która jest najcieplejsza?<|im_end|>
```
## Evaluation
Bielik-11B-v2.1-Instruct has been evaluated on several benchmarks to assess its performance across various tasks and languages. These benchmarks include:
1. Open PL LLM Leaderboard
2. Open LLM Leaderboard
3. Polish MT-Bench
4. Polish EQ-Bench (Emotional Intelligence Benchmark)
5. MixEval
The following sections provide detailed results for each of these benchmarks, demonstrating the model's capabilities in both Polish and English language tasks.
### Open PL LLM Leaderboard
Models have been evaluated on [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) 5-shot. The benchmark evaluates models in NLP tasks like sentiment analysis, categorization, text classification but does not test chatting skills. Average column is an average score among all tasks normalized by baseline scores.
| Model | Parameters (B)| Average |
|---------------------------------|------------|---------|
| Meta-Llama-3.1-405B-Instruct-FP8,API | 405 | 69.44 |
| Mistral-Large-Instruct-2407 | 123 | 69.11 |
| Qwen2-72B-Instruct | 72 | 65.87 |
| Bielik-11B-v2.2-Instruct | 11 | 65.57 |
| Meta-Llama-3.1-70B-Instruct | 70 | 65.49 |
| **Bielik-11B-v2.1-Instruct** | **11** | **65.45** |
| Mixtral-8x22B-Instruct-v0.1 | 141 | 65.23 |
| Bielik-11B-v2.0-Instruct | 11 | 64.98 |
| Meta-Llama-3-70B-Instruct | 70 | 64.45 |
| Athene-70B | 70 | 63.65 |
| WizardLM-2-8x22B | 141 | 62.35 |
| Qwen1.5-72B-Chat | 72 | 58.67 |
| Qwen2-57B-A14B-Instruct | 57 | 56.89 |
| glm-4-9b-chat | 9 | 56.61 |
| aya-23-35B | 35 | 56.37 |
| Phi-3.5-MoE-instruct | 41.9 | 56.34 |
| openchat-3.5-0106-gemma | 7 | 55.69 |
| Mistral-Nemo-Instruct-2407 | 12 | 55.27 |
| SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.24 |
| Mixtral-8x7B-Instruct-v0.1 | 46.7 | 55.07 |
| Bielik-7B-Instruct-v0.1 | 7 | 44.70 |
| trurl-2-13b-academic | 13 | 36.28 |
| trurl-2-7b | 7 | 26.93 |
The results from the Open PL LLM Leaderboard demonstrate the exceptional performance of Bielik-11B-v2.1-Instruct:
1. Superior performance in its class: Bielik-11B-v2.1-Instruct outperforms all other models with less than 70B parameters. This is a significant achievement, showcasing its efficiency and effectiveness despite having fewer parameters than many competitors.
2. Competitive with larger models: with a score of 65.45, Bielik-11B-v2.1-Instruct performs on par with models in the 70B parameter range. This indicates that it achieves comparable results to much larger models, demonstrating its advanced architecture and training methodology.
3. Substantial improvement over previous version: the model shows a marked improvement over its predecessor, Bielik-7B-Instruct-v0.1, which scored 43.64. This leap in performance highlights the successful enhancements and optimizations implemented in this newer version.
4. Leading position for Polish language models: in the context of Polish language models, Bielik-11B-v2.1-Instruct stands out as a leader. There are no other competitive models specifically tailored for the Polish language that match its performance, making it a crucial resource for Polish NLP tasks.
These results underscore Bielik-11B-v2.1-Instruct's position as a state-of-the-art model for Polish language processing, offering high performance with relatively modest computational requirements.
#### Open PL LLM Leaderboard - Generative Tasks Performance
This section presents a focused comparison of generative Polish language task performance between Bielik models and GPT-3.5. The evaluation is limited to generative tasks due to the constraints of assessing OpenAI models. The comprehensive nature and associated costs of the benchmark explain the limited number of models evaluated.
| Model | Parameters (B) | Average g |
|-------------------------------|----------------|---------------|
| **Bielik-11B-v2.1-Instruct** | 11 | **66.58** |
| Bielik-11B-v2.2-Instruct | 11 | 66.11 |
| Bielik-11B-v2.0-Instruct | 11 | 65.58 |
| gpt-3.5-turbo-instruct | Unknown | 55.65 |
The performance variation among Bielik versions is minimal, indicating consistent quality across iterations. Bielik-11B-v2.1-Instruct demonstrates an impressive 19.6% performance advantage over GPT-3.5.
### Open LLM Leaderboard
The Open LLM Leaderboard evaluates models on various English language tasks, providing insights into the model's performance across different linguistic challenges.
| Model | AVG | arc_challenge | hellaswag | truthfulqa_mc2 | mmlu | winogrande | gsm8k |
|--------------------------|-------|---------------|-----------|----------------|-------|------------|-------|
| Bielik-11B-v2.2-Instruct | 69.86 | 59.90 | 80.16 | 58.34 | 64.34 | 75.30 | 81.12 |
| **Bielik-11B-v2.1-Instruct** | **69.82** | 59.56 | 80.20 | 59.35 | 64.18 | 75.06 | 80.59 |
| Bielik-11B-v2.0-Instruct | 68.04 | 58.62 | 78.65 | 54.65 | 63.71 | 76.32 | 76.27 |
| Bielik-11B-v2 | 65.87 | 60.58 | 79.84 | 46.13 | 63.06 | 77.82 | 67.78 |
| Mistral-7B-Instruct-v0.2 | 65.71 | 63.14 | 84.88 | 68.26 | 60.78 | 77.19 | 40.03 |
| Bielik-7B-Instruct-v0.1 | 51.26 | 47.53 | 68.91 | 49.47 | 46.18 | 65.51 | 29.95 |
Bielik-11B-v2.1-Instruct shows impressive performance on English language tasks:
1. Significant improvement over its base model (4-point increase).
2. Substantial 18-point improvement over Bielik-7B-Instruct-v0.1.
These results demonstrate Bielik-11B-v2.1-Instruct's versatility in both Polish and English, highlighting the effectiveness of its instruction tuning process.
### Polish MT-Bench
The Bielik-11B-v2.1-Instruct (16 bit) model was also evaluated using the MT-Bench benchmark. The quality of the model was evaluated using the English version (the original version without modifications) and the Polish version created by SpeakLeash (tasks and evaluation in Polish; the content of the tasks was also adapted to the context of the Polish language).
#### MT-Bench English
| Model | Score |
|-----------------|----------|
| **Bielik-11B-v2.1** | **8.537500** |
| Bielik-11B-v2.2 | 8.390625 |
| Bielik-11B-v2.0 | 8.159375 |
#### MT-Bench Polish
| Model | Parameters (B) | Score |
|-------------------------------------|----------------|----------|
| Qwen2-72B-Instruct | 72 | 8.775000 |
| Mistral-Large-Instruct-2407 (123B) | 123 | 8.662500 |
| gemma-2-27b-it | 27 | 8.618750 |
| Mixtral-8x22b | 141 | 8.231250 |
| Meta-Llama-3.1-405B-Instruct | 405 | 8.168750 |
| Meta-Llama-3.1-70B-Instruct | 70 | 8.150000 |
| Bielik-11B-v2.2-Instruct | 11 | 8.115625 |
| **Bielik-11B-v2.1-Instruct** | **11** | **7.996875** |
| gpt-3.5-turbo | Unknown | 7.868750 |
| Mixtral-8x7b | 46.7 | 7.637500 |
| Bielik-11B-v2.0-Instruct | 11 | 7.562500 |
| Mistral-Nemo-Instruct-2407 | 12 | 7.368750 |
| openchat-3.5-0106-gemma | 7 | 6.812500 |
| Mistral-7B-Instruct-v0.2 | 7 | 6.556250 |
| Meta-Llama-3.1-8B-Instruct | 8 | 6.556250 |
| Bielik-7B-Instruct-v0.1 | 7 | 6.081250 |
| Mistral-7B-Instruct-v0.3 | 7 | 5.818750 |
| Polka-Mistral-7B-SFT | 7 | 4.518750 |
| trurl-2-7b | 7 | 2.762500 |
Key observations on Bielik-11B-v2.1 performance:
1. Strong performance among mid-sized models: Bielik-11B-v2.1-Instruct scored **7.996875**, placing it ahead of several well-known models like GPT-3.5-turbo (7.868750) and Mixtral-8x7b (7.637500). This indicates that Bielik-11B-v2.1-Instruct is competitive among mid-sized models, particularly those in the 11B-70B parameter range.
2. Competitive against larger models: Bielik-11B-v2.1-Instruct performs close to Meta-Llama-3.1-70B-Instruct (8.150000), Meta-Llama-3.1-405B-Instruct (8.168750) and even Mixtral-8x22b (8.231250), which have significantly more parameters. This efficiency in performance relative to size could make it an attractive option for tasks where resource constraints are a consideration. Bielik generated 100% of its answers in Polish, while other models (not typically trained on Polish) may answer Polish questions in English.
3. Significant improvement over previous versions: compared to its predecessor, **Bielik-7B-Instruct-v0.1**, which scored **6.081250**, the Bielik-11B-v2.1-Instruct shows a significant improvement. The score increased by almost **2 points**, highlighting substantial advancements in model quality, optimization and training methodology.
For more information - answers to test tasks and values in each category, visit the [MT-Bench PL](https://huggingface.co/spaces/speakleash/mt-bench-pl) website.
### Polish EQ-Bench
[Polish Emotional Intelligence Benchmark for LLMs](https://huggingface.co/spaces/speakleash/polish_eq-bench)
| Model | Parameters (B) | Score |
|-------------------------------|--------|-------|
| Mistral-Large-Instruct-2407 | 123 | 78.07 |
| Meta-Llama-3.1-405B-Instruct-FP8 | 405 | 77.23 |
| gpt-4o-2024-08-06 | ? | 75.15 |
| gpt-4-turbo-2024-04-09 | ? | 74.59 |
| Meta-Llama-3.1-70B-Instruct | 70 | 72.53 |
| Qwen2-72B-Instruct | 72 | 71.23 |
| Meta-Llama-3-70B-Instruct | 70 | 71.21 |
| gpt-4o-mini-2024-07-18 | ? | 71.15 |
| WizardLM-2-8x22B | 141 | 69.56 |
| Bielik-11B-v2.2-Instruct | 11 | 69.05 |
| Bielik-11B-v2.0-Instruct | 11 | 68.24 |
| Qwen1.5-72B-Chat | 72 | 68.03 |
| Mixtral-8x22B-Instruct-v0.1 | 141 | 67.63 |
| **Bielik-11B-v2.1-Instruct** | **11** | **60.07** |
| Qwen1.5-32B-Chat | 32 | 59.63 |
| openchat-3.5-0106-gemma | 7 | 59.58 |
| aya-23-35B | 35 | 58.41 |
| gpt-3.5-turbo | ? | 57.7 |
| Qwen2-57B-A14B-Instruct | 57 | 57.64 |
| Mixtral-8x7B-Instruct-v0.1 | 47 | 57.61 |
| SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.21 |
| Mistral-7B-Instruct-v0.2 | 7 | 47.02 |
### MixEval
MixEval is a ground-truth-based English benchmark designed to evaluate Large Language Models (LLMs) efficiently and effectively. Key features of MixEval include:
1. Derived from off-the-shelf benchmark mixtures
2. Highly capable model ranking with a 0.96 correlation to Chatbot Arena
3. Local and quick execution, requiring only 6% of the time and cost compared to running MMLU
This benchmark provides a robust and time-efficient method for assessing LLM performance, making it a valuable tool for ongoing model evaluation and comparison.
| Model | MixEval | MixEval-Hard |
|-------------------------------|---------|--------------|
| **Bielik-11B-v2.1-Instruct** | **74.55** | **45.00** |
| Bielik-11B-v2.2-Instruct | 72.35 | 39.65 |
| Bielik-11B-v2.0-Instruct | 72.10 | 40.20 |
| Mistral-7B-Instruct-v0.2 | 70.00 | 36.20 |
The results show that Bielik-11B-v2.1-Instruct performs well on the MixEval benchmark, achieving a score of 74.55 on the standard MixEval and 45.00 on MixEval-Hard. Notably, Bielik-11B-v2.1-Instruct significantly outperforms Mistral-7B-Instruct-v0.2 on both metrics, demonstrating its improved capabilities despite being based on a similar architecture.
### Chat Arena PL
Chat Arena PL is a human-evaluated benchmark that provides a direct comparison of model performance through head-to-head battles. Unlike the automated benchmarks mentioned above, this evaluation relies on human judgment to assess the quality and effectiveness of model responses. The results offer valuable insights into how different models perform in real-world, conversational scenarios as perceived by human evaluators.
Results accessed on 2024-08-26.
| # | Model | Battles | Won | Lost | Draws | Win % | ELO |
|---|-------|-------|---------|-----------|--------|-------------|-----|
| 1 | Bielik-11B-v2.2-Instruct | 92 | 72 | 14 | 6 | 83.72% | 1234 |
| 2 | **Bielik-11B-v2.1-Instruct** | 240 | 171 | 50 | 19 | **77.38%** | 1174 |
| 3 | gpt-4o-mini | 639 | 402 | 117 | 120 | 77.46% | 1141 |
| 4 | Mistral Large 2 (2024-07) | 324 | 188 | 69 | 67 | 73.15% | 1125 |
| 5 | Llama-3.1-405B | 548 | 297 | 144 | 107 | 67.35% | 1090 |
| 6 | Bielik-11B-v2.0-Instruct | 1289 | 695 | 352 | 242 | 66.38% | 1059 |
| 7 | Llama-3.1-70B | 498 | 221 | 187 | 90 | 54.17% | 1033 |
| 8 | Bielik-1-7B | 2041 | 1029 | 638 | 374 | 61.73% | 1020 |
| 9 | Mixtral-8x22B-v0.1 | 432 | 166 | 167 | 99 | 49.85% | 1018 |
| 10 | Qwen2-72B | 451 | 179 | 177 | 95 | 50.28% | 1011 |
| 11 | gpt-3.5-turbo | 2186 | 1007 | 731 | 448 | 57.94% | 1008 |
| 12 | Llama-3.1-8B | 440 | 155 | 227 | 58 | 40.58% | 975 |
| 13 | Mixtral-8x7B-v0.1 | 1997 | 794 | 804 | 399 | 49.69% | 973 |
| 14 | Llama-3-70b | 2008 | 733 | 909 | 366 | 44.64% | 956 |
| 15 | Mistral Nemo (2024-07) | 301 | 84 | 164 | 53 | 33.87% | 954 |
| 16 | Llama-3-8b | 1911 | 473 | 1091 | 347 | 30.24% | 909 |
| 17 | gemma-7b-it | 1928 | 418 | 1221 | 289 | 25.5% | 888 |
The results show that Bielik-11B-v2.1-Instruct outperforms almost all other models in this benchmark. This impressive performance demonstrates its effectiveness in real-world conversational scenarios, as judged by human evaluators.
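For reference, the Win % column counts only decisive battles, i.e. draws are excluded from the denominator. A minimal sketch of that calculation (inferred from the table values, not taken from the benchmark's own code), using the Bielik-11B-v2.1-Instruct row:

```python
# Win % as reported in the Chat Arena PL table: draws are excluded from the denominator.
def win_rate(won: int, lost: int) -> float:
    return 100.0 * won / (won + lost)

# Bielik-11B-v2.1-Instruct row: 240 battles = 171 won + 50 lost + 19 draws
print(f"{win_rate(171, 50):.2f}%")  # -> 77.38%
```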
## Limitations and Biases
Bielik-11B-v2.1-Instruct is a quick demonstration that the base model can be easily fine-tuned to achieve compelling and promising performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs.
Bielik-11B-v2.1-Instruct can produce factually incorrect output and should not be relied on to produce factually accurate data. Bielik-11B-v2.1-Instruct was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.
## Citation
Please cite this model using the following format:
```
@misc{Bielik11Bv21i,
title = {Bielik-11B-v2.1-Instruct model card},
author = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof and {SpeakLeash Team} and {Cyfronet Team}},
year = {2024},
url = {https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct},
note = {Accessed: 2024-09-10}, % change this date
urldate = {2024-09-10} % change this date
}
@unpublished{Bielik11Bv21a,
author = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof},
title = {Bielik: A Family of Large Language Models for the Polish Language - Development, Insights, and Evaluation},
year = {2024},
}
```
## Responsible for training the model
* [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training
* [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training
* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - conceptualizing and coordinating DPO training, data preparation
* [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data preparation and ensuring data quality
* [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks
The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. Individuals who contributed to the creation of the model:
[Sebastian Kondracki](https://www.linkedin.com/in/sebastian-kondracki/),
[Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/),
[Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/),
[Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/),
[Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/),
[Maria Filipkowska](https://www.linkedin.com/in/maria-filipkowska/),
[Jan Maria Kowalski](https://www.linkedin.com/in/janmariakowalski/),
[Karol Jezierski](https://www.linkedin.com/in/karol-jezierski/),
[Kacper Milan](https://www.linkedin.com/in/kacper-milan/),
[Jan Sowa](https://www.linkedin.com/in/janpiotrsowa/),
[Len Krawczyk](https://www.linkedin.com/in/magdalena-krawczyk-7810942ab/),
[Marta Seidler](https://www.linkedin.com/in/marta-seidler-751102259/),
[Agnieszka Ratajska](https://www.linkedin.com/in/agnieszka-ratajska/),
[Krzysztof Koziarek](https://www.linkedin.com/in/krzysztofkoziarek/),
[Szymon Pepliński](http://linkedin.com/in/szymonpeplinski/),
[Zuzanna Dabić](https://www.linkedin.com/in/zuzanna-dabic/),
[Filip Bogacz](https://linkedin.com/in/Fibogacci),
[Agnieszka Kosiak](https://www.linkedin.com/in/agn-kosiak),
[Izabela Babis](https://www.linkedin.com/in/izabela-babis-2274b8105/),
[Nina Babis](https://www.linkedin.com/in/nina-babis-00055a140/).
Members of the ACK Cyfronet AGH team providing valuable support and expertise:
[Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/),
[Marek Magryś](https://www.linkedin.com/in/magrys/),
[Mieszko Cholewa ](https://www.linkedin.com/in/mieszko-cholewa-613726301/).
## Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/pv4brQMDTy).
|
naver-ai/rope_mixed_deit_large_patch16_LS
|
naver-ai
| 2024-10-18T08:17:32Z | 7 | 0 | null |
[
"pytorch",
"image-classification",
"dataset:ILSVRC/imagenet-1k",
"arxiv:2403.13298",
"license:bsd-3-clause",
"region:us"
] |
image-classification
| 2024-10-15T09:21:05Z |
---
license: bsd-3-clause
datasets:
- ILSVRC/imagenet-1k
pipeline_tag: image-classification
---
# Model Card
<!-- Provide a quick summary of what the model is/does. -->
ImageNet-1k DeiT-iii pre-trained model with Rotary Position Embedding
## Rotary Position Embedding for Vision Transformer [ECCV 2024]
- **Repository:** https://github.com/naver-ai/rope-vit
- **Paper:** https://arxiv.org/abs/2403.13298
## Citation
```
@inproceedings{heo2024ropevit,
title={Rotary Position Embedding for Vision Transformer},
author={Heo, Byeongho and Park, Song and Han, Dongyoon and Yun, Sangdoo},
year={2024},
booktitle={European Conference on Computer Vision (ECCV)},
}
```
|
naver-ai/rope_mixed_ape_deit_small_patch16_LS
|
naver-ai
| 2024-10-18T08:17:21Z | 5 | 0 | null |
[
"pytorch",
"image-classification",
"dataset:ILSVRC/imagenet-1k",
"arxiv:2403.13298",
"license:bsd-3-clause",
"region:us"
] |
image-classification
| 2024-10-15T09:29:21Z |
---
license: bsd-3-clause
datasets:
- ILSVRC/imagenet-1k
pipeline_tag: image-classification
---
# Model Card
<!-- Provide a quick summary of what the model is/does. -->
ImageNet-1k DeiT-iii pre-trained model with Rotary Position Embedding
## Rotary Position Embedding for Vision Transformer [ECCV 2024]
- **Repository:** https://github.com/naver-ai/rope-vit
- **Paper:** https://arxiv.org/abs/2403.13298
## Citation
```
@inproceedings{heo2024ropevit,
title={Rotary Position Embedding for Vision Transformer},
author={Heo, Byeongho and Park, Song and Han, Dongyoon and Yun, Sangdoo},
year={2024},
booktitle={European Conference on Computer Vision (ECCV)},
}
```
|
naver-ai/rope_mixed_deit_base_patch16_LS
|
naver-ai
| 2024-10-18T08:15:46Z | 5 | 0 | null |
[
"pytorch",
"image-classification",
"dataset:ILSVRC/imagenet-1k",
"arxiv:2403.13298",
"license:bsd-3-clause",
"region:us"
] |
image-classification
| 2024-10-15T09:20:30Z |
---
license: bsd-3-clause
datasets:
- ILSVRC/imagenet-1k
pipeline_tag: image-classification
---
# Model Card
<!-- Provide a quick summary of what the model is/does. -->
ImageNet-1k DeiT-iii pre-trained model with Rotary Position Embedding
## Rotary Position Embedding for Vision Transformer [ECCV 2024]
- **Repository:** https://github.com/naver-ai/rope-vit
- **Paper:** https://arxiv.org/abs/2403.13298
## Citation
```
@inproceedings{heo2024ropevit,
title={Rotary Position Embedding for Vision Transformer},
author={Heo, Byeongho and Park, Song and Han, Dongyoon and Yun, Sangdoo},
year={2024},
booktitle={European Conference on Computer Vision (ECCV)},
}
```
|
cuongdev/vtthuc2
|
cuongdev
| 2024-10-18T08:12:33Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-18T08:08:33Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### vtThuc2 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
eddey/bert-finetuned-ner
|
eddey
| 2024-10-18T08:11:13Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-18T07:08:32Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9317693705600528
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9403918299291371
- name: Accuracy
type: accuracy
value: 0.9856949431918526
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0634
- Precision: 0.9318
- Recall: 0.9492
- F1: 0.9404
- Accuracy: 0.9857
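A minimal inference sketch (not part of the original card) using the standard 🤗 Transformers token-classification pipeline; the example sentence and the `aggregation_strategy` setting are illustrative assumptions:

```python
from transformers import pipeline

# Load the fine-tuned NER model from the Hub and group sub-word predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="eddey/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
# Expected output: CoNLL-2003-style entities, e.g. an ORG span and a LOC span with confidence scores.
```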
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0763 | 1.0 | 1756 | 0.0649 | 0.9075 | 0.9345 | 0.9208 | 0.9828 |
| 0.0348 | 2.0 | 3512 | 0.0689 | 0.9281 | 0.9424 | 0.9352 | 0.9842 |
| 0.0235 | 3.0 | 5268 | 0.0634 | 0.9318 | 0.9492 | 0.9404 | 0.9857 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cpu
- Datasets 3.0.1
- Tokenizers 0.20.1
|
enochlev/XLM-CEBinary-VMO2-mini-2
|
enochlev
| 2024-10-18T08:09:40Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-18T08:08:56Z |
---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
spneshaei/reflectium_and_mindmate_allfolds
|
spneshaei
| 2024-10-18T08:07:48Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-18T08:07:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dyryu/klue_roberta-base-klue-sts
|
dyryu
| 2024-10-18T08:06:16Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-10-18T08:05:41Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dyryu/klue_roberta-base-klue-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('dyryu/klue_roberta-base-klue-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('dyryu/klue_roberta-base-klue-sts')
model = AutoModel.from_pretrained('dyryu/klue_roberta-base-klue-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=dyryu/klue_roberta-base-klue-sts)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 657 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF
|
mradermacher
| 2024-10-18T08:02:08Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Phi-3.5-mini-TitanFusion-0.3",
"base_model:quantized:bunnycore/Phi-3.5-mini-TitanFusion-0.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T07:09:10Z |
---
base_model: bunnycore/Phi-3.5-mini-TitanFusion-0.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
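As a concrete example, any of the single-file quants listed below can be pulled and run directly with a recent llama.cpp build; the quant choice and prompt here are illustrative, not a recommendation from this card:

```bash
# Fetch the Q4_K_M quant straight from this repo and run it (requires llama.cpp built with CURL support).
llama-cli \
  --hf-repo mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF \
  --hf-file Phi-3.5-mini-TitanFusion-0.3.Q4_K_M.gguf \
  -p "Write a short haiku about quantization."
```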
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.3-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.3.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mlx-community/Llama-3.2-11B-Vision-Instruct-4bit
|
mlx-community
| 2024-10-18T07:59:18Z | 681 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"mlx",
"conversational",
"en",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct",
"license:llama3.2",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-10-18T07:22:41Z |
---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- mlx
---
# mlx-community/Llama-3.2-11B-Vision-Instruct-4bit
This model was converted to MLX format from [`unsloth/Llama-3.2-11B-Vision-Instruct`](https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct) using mlx-vlm version **0.1.0**.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Llama-3.2-11B-Vision-Instruct-4bit --max-tokens 100 --temp 0.0
```
|
danish-foundation-models/munin-7b-alpha-Q8_0-GGUF
|
danish-foundation-models
| 2024-10-18T07:58:06Z | 13 | 0 | null |
[
"gguf",
"pretrained",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"da",
"dataset:DDSC/partial-danish-gigaword-no-twitter",
"base_model:danish-foundation-models/munin-7b-alpha",
"base_model:quantized:danish-foundation-models/munin-7b-alpha",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T07:57:30Z |
---
base_model: danish-foundation-models/munin-7b-alpha
datasets:
- DDSC/partial-danish-gigaword-no-twitter
language:
- da
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.7
---
# saattrupdan/munin-7b-alpha-Q8_0-GGUF
This model was converted to GGUF format from [`danish-foundation-models/munin-7b-alpha`](https://huggingface.co/danish-foundation-models/munin-7b-alpha) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/danish-foundation-models/munin-7b-alpha) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo saattrupdan/munin-7b-alpha-Q8_0-GGUF --hf-file munin-7b-alpha-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo saattrupdan/munin-7b-alpha-Q8_0-GGUF --hf-file munin-7b-alpha-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo saattrupdan/munin-7b-alpha-Q8_0-GGUF --hf-file munin-7b-alpha-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo saattrupdan/munin-7b-alpha-Q8_0-GGUF --hf-file munin-7b-alpha-q8_0.gguf -c 2048
```
|
sheldonrobinson/Aria-sequential_mlp
|
sheldonrobinson
| 2024-10-18T07:45:16Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"aria",
"image-text-to-text",
"multimodal",
"conversational",
"custom_code",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-04T18:15:19Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- multimodal
- aria
---
<!-- <p align="center">
<br>Aria</br>
</p> -->
This is a fork of the [rhymes-ai/Aria](https://huggingface.co/rhymes-ai/Aria) model. The only modification is replacing [grouped GEMM](https://github.com/tgale96/grouped_gemm) with a sequential MLP. In this configuration, each expert is implemented as a `torch.nn.Linear` layer executed in sequence. This adjustment simplifies quantization with current open-source libraries, which are optimized for `nn.Linear` layers.
While the sequential MLP approach makes quantization easier, grouped GEMM offers faster inference.
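To make the description above concrete, here is a minimal, hypothetical sketch (not the actual Aria implementation) of an expert bank built from plain `torch.nn.Linear` layers applied one expert at a time, which is the property that makes standard quantization tooling applicable:

```python
import torch
import torch.nn as nn

class SequentialExperts(nn.Module):
    """Toy MoE expert bank: each expert is a plain nn.Linear, applied in a loop.

    Quantization libraries that target nn.Linear can handle each expert directly,
    at the cost of looping over experts instead of issuing one grouped GEMM.
    """

    def __init__(self, num_experts: int, hidden: int, inner: int):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(hidden, inner) for _ in range(num_experts))

    def forward(self, x: torch.Tensor, expert_ids: torch.Tensor) -> torch.Tensor:
        # x: (tokens, hidden); expert_ids: (tokens,) router assignment per token
        out = torch.empty(x.size(0), self.experts[0].out_features, dtype=x.dtype, device=x.device)
        for i, expert in enumerate(self.experts):
            mask = expert_ids == i
            if mask.any():
                out[mask] = expert(x[mask])
        return out

# Usage: route 8 tokens across 4 toy experts
bank = SequentialExperts(num_experts=4, hidden=16, inner=32)
tokens = torch.randn(8, 16)
routes = torch.randint(0, 4, (8,))
print(bank(tokens, routes).shape)  # torch.Size([8, 32])
```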
## Quick Start
### Installation
```
pip install transformers==4.45.0 accelerate==0.34.1 sentencepiece==0.2.0 torchvision requests torch Pillow
pip install flash-attn --no-build-isolation
```
### Inference
```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
model_id_or_path = "rhymes-ai/Aria-sequential_mlp"
model = AutoModelForCausalLM.from_pretrained(model_id_or_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id_or_path, trust_remote_code=True)
image_path = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
image = Image.open(requests.get(image_path, stream=True).raw)
messages = [
{
"role": "user",
"content": [
{"text": None, "type": "image"},
{"text": "what is the image?", "type": "text"},
],
}
]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=text, images=image, return_tensors="pt")
inputs["pixel_values"] = inputs["pixel_values"].to(model.dtype)
inputs = {k: v.to(model.device) for k, v in inputs.items()}
with torch.inference_mode(), torch.cuda.amp.autocast(dtype=torch.bfloat16):
output = model.generate(
**inputs,
max_new_tokens=500,
stop_strings=["<|im_end|>"],
tokenizer=processor.tokenizer,
do_sample=True,
temperature=0.9,
)
output_ids = output[0][inputs["input_ids"].shape[1]:]
result = processor.decode(output_ids, skip_special_tokens=True)
print(result)
```
|
chuxin-llm/Chuxin-Embedding
|
chuxin-llm
| 2024-10-18T07:40:29Z | 1,751 | 9 | null |
[
"safetensors",
"xlm-roberta",
"mteb",
"zh",
"model-index",
"region:us"
] | null | 2024-09-02T02:16:28Z |
---
language:
- zh
model-index:
- name: Chuxin-Embedding
results:
- dataset:
config: default
name: MTEB CmedqaRetrieval (default)
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
split: dev
type: C-MTEB/CmedqaRetrieval
metrics:
- type: map_at_1
value: 33.391999999999996
- type: map_at_10
value: 48.715
- type: map_at_100
value: 50.381
- type: map_at_1000
value: 50.456
- type: map_at_3
value: 43.708999999999996
- type: map_at_5
value: 46.405
- type: mrr_at_1
value: 48.612
- type: mrr_at_10
value: 58.67099999999999
- type: mrr_at_100
value: 59.38
- type: mrr_at_1000
value: 59.396
- type: mrr_at_3
value: 55.906
- type: mrr_at_5
value: 57.421
- type: ndcg_at_1
value: 48.612
- type: ndcg_at_10
value: 56.581
- type: ndcg_at_100
value: 62.422999999999995
- type: ndcg_at_1000
value: 63.476
- type: ndcg_at_3
value: 50.271
- type: ndcg_at_5
value: 52.79899999999999
- type: precision_at_1
value: 48.612
- type: precision_at_10
value: 11.995000000000001
- type: precision_at_100
value: 1.696
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 27.465
- type: precision_at_5
value: 19.675
- type: recall_at_1
value: 33.391999999999996
- type: recall_at_10
value: 69.87100000000001
- type: recall_at_100
value: 93.078
- type: recall_at_1000
value: 99.55199999999999
- type: recall_at_3
value: 50.939
- type: recall_at_5
value: 58.714
- type: main_score
value: 56.581
task:
type: Retrieval
- dataset:
config: default
name: MTEB CovidRetrieval (default)
revision: 1271c7809071a13532e05f25fb53511ffce77117
split: dev
type: C-MTEB/CovidRetrieval
metrics:
- type: map_at_1
value: 71.918
- type: map_at_10
value: 80.609
- type: map_at_100
value: 80.796
- type: map_at_1000
value: 80.798
- type: map_at_3
value: 79.224
- type: map_at_5
value: 79.96
- type: mrr_at_1
value: 72.076
- type: mrr_at_10
value: 80.61399999999999
- type: mrr_at_100
value: 80.801
- type: mrr_at_1000
value: 80.803
- type: mrr_at_3
value: 79.276
- type: mrr_at_5
value: 80.025
- type: ndcg_at_1
value: 72.076
- type: ndcg_at_10
value: 84.286
- type: ndcg_at_100
value: 85.14500000000001
- type: ndcg_at_1000
value: 85.21
- type: ndcg_at_3
value: 81.45400000000001
- type: ndcg_at_5
value: 82.781
- type: precision_at_1
value: 72.076
- type: precision_at_10
value: 9.663
- type: precision_at_100
value: 1.005
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 29.398999999999997
- type: precision_at_5
value: 18.335
- type: recall_at_1
value: 71.918
- type: recall_at_10
value: 95.574
- type: recall_at_100
value: 99.473
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 87.82900000000001
- type: recall_at_5
value: 90.991
- type: main_score
value: 84.286
task:
type: Retrieval
- dataset:
config: default
name: MTEB DuRetrieval (default)
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
split: dev
type: C-MTEB/DuRetrieval
metrics:
- type: map_at_1
value: 25.019999999999996
- type: map_at_10
value: 77.744
- type: map_at_100
value: 80.562
- type: map_at_1000
value: 80.60300000000001
- type: map_at_3
value: 52.642999999999994
- type: map_at_5
value: 67.179
- type: mrr_at_1
value: 86.5
- type: mrr_at_10
value: 91.024
- type: mrr_at_100
value: 91.09
- type: mrr_at_1000
value: 91.093
- type: mrr_at_3
value: 90.558
- type: mrr_at_5
value: 90.913
- type: ndcg_at_1
value: 86.5
- type: ndcg_at_10
value: 85.651
- type: ndcg_at_100
value: 88.504
- type: ndcg_at_1000
value: 88.887
- type: ndcg_at_3
value: 82.707
- type: ndcg_at_5
value: 82.596
- type: precision_at_1
value: 86.5
- type: precision_at_10
value: 41.595
- type: precision_at_100
value: 4.7940000000000005
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 74.233
- type: precision_at_5
value: 63.68000000000001
- type: recall_at_1
value: 25.019999999999996
- type: recall_at_10
value: 88.114
- type: recall_at_100
value: 97.442
- type: recall_at_1000
value: 99.39099999999999
- type: recall_at_3
value: 55.397
- type: recall_at_5
value: 73.095
- type: main_score
value: 85.651
task:
type: Retrieval
- dataset:
config: default
name: MTEB EcomRetrieval (default)
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
split: dev
type: C-MTEB/EcomRetrieval
metrics:
- type: map_at_1
value: 55.60000000000001
- type: map_at_10
value: 67.891
- type: map_at_100
value: 68.28699999999999
- type: map_at_1000
value: 68.28699999999999
- type: map_at_3
value: 64.86699999999999
- type: map_at_5
value: 66.652
- type: mrr_at_1
value: 55.60000000000001
- type: mrr_at_10
value: 67.891
- type: mrr_at_100
value: 68.28699999999999
- type: mrr_at_1000
value: 68.28699999999999
- type: mrr_at_3
value: 64.86699999999999
- type: mrr_at_5
value: 66.652
- type: ndcg_at_1
value: 55.60000000000001
- type: ndcg_at_10
value: 74.01100000000001
- type: ndcg_at_100
value: 75.602
- type: ndcg_at_1000
value: 75.602
- type: ndcg_at_3
value: 67.833
- type: ndcg_at_5
value: 71.005
- type: precision_at_1
value: 55.60000000000001
- type: precision_at_10
value: 9.33
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 25.467000000000002
- type: precision_at_5
value: 16.8
- type: recall_at_1
value: 55.60000000000001
- type: recall_at_10
value: 93.30000000000001
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 76.4
- type: recall_at_5
value: 84.0
- type: main_score
value: 74.01100000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB MMarcoRetrieval (default)
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
split: dev
type: C-MTEB/MMarcoRetrieval
metrics:
- type: map_at_1
value: 66.24799999999999
- type: map_at_10
value: 75.356
- type: map_at_100
value: 75.653
- type: map_at_1000
value: 75.664
- type: map_at_3
value: 73.515
- type: map_at_5
value: 74.67099999999999
- type: mrr_at_1
value: 68.496
- type: mrr_at_10
value: 75.91499999999999
- type: mrr_at_100
value: 76.17399999999999
- type: mrr_at_1000
value: 76.184
- type: mrr_at_3
value: 74.315
- type: mrr_at_5
value: 75.313
- type: ndcg_at_1
value: 68.496
- type: ndcg_at_10
value: 79.065
- type: ndcg_at_100
value: 80.417
- type: ndcg_at_1000
value: 80.72399999999999
- type: ndcg_at_3
value: 75.551
- type: ndcg_at_5
value: 77.505
- type: precision_at_1
value: 68.496
- type: precision_at_10
value: 9.563
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 28.391
- type: precision_at_5
value: 18.086
- type: recall_at_1
value: 66.24799999999999
- type: recall_at_10
value: 89.97
- type: recall_at_100
value: 96.13199999999999
- type: recall_at_1000
value: 98.551
- type: recall_at_3
value: 80.624
- type: recall_at_5
value: 85.271
- type: main_score
value: 79.065
task:
type: Retrieval
- dataset:
config: default
name: MTEB MedicalRetrieval (default)
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
split: dev
type: C-MTEB/MedicalRetrieval
metrics:
- type: map_at_1
value: 61.8
- type: map_at_10
value: 71.101
- type: map_at_100
value: 71.576
- type: map_at_1000
value: 71.583
- type: map_at_3
value: 68.867
- type: map_at_5
value: 70.272
- type: mrr_at_1
value: 61.9
- type: mrr_at_10
value: 71.158
- type: mrr_at_100
value: 71.625
- type: mrr_at_1000
value: 71.631
- type: mrr_at_3
value: 68.917
- type: mrr_at_5
value: 70.317
- type: ndcg_at_1
value: 61.8
- type: ndcg_at_10
value: 75.624
- type: ndcg_at_100
value: 77.702
- type: ndcg_at_1000
value: 77.836
- type: ndcg_at_3
value: 71.114
- type: ndcg_at_5
value: 73.636
- type: precision_at_1
value: 61.8
- type: precision_at_10
value: 8.98
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 25.867
- type: precision_at_5
value: 16.74
- type: recall_at_1
value: 61.8
- type: recall_at_10
value: 89.8
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 77.60000000000001
- type: recall_at_5
value: 83.7
- type: main_score
value: 75.624
task:
type: Retrieval
- dataset:
config: default
name: MTEB T2Retrieval (default)
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
split: dev
type: C-MTEB/T2Retrieval
metrics:
- type: map_at_1
value: 27.173000000000002
- type: map_at_10
value: 76.454
- type: map_at_100
value: 80.021
- type: map_at_1000
value: 80.092
- type: map_at_3
value: 53.876999999999995
- type: map_at_5
value: 66.122
- type: mrr_at_1
value: 89.519
- type: mrr_at_10
value: 92.091
- type: mrr_at_100
value: 92.179
- type: mrr_at_1000
value: 92.183
- type: mrr_at_3
value: 91.655
- type: mrr_at_5
value: 91.94
- type: ndcg_at_1
value: 89.519
- type: ndcg_at_10
value: 84.043
- type: ndcg_at_100
value: 87.60900000000001
- type: ndcg_at_1000
value: 88.32799999999999
- type: ndcg_at_3
value: 85.623
- type: ndcg_at_5
value: 84.111
- type: precision_at_1
value: 89.519
- type: precision_at_10
value: 41.760000000000005
- type: precision_at_100
value: 4.982
- type: precision_at_1000
value: 0.515
- type: precision_at_3
value: 74.944
- type: precision_at_5
value: 62.705999999999996
- type: recall_at_1
value: 27.173000000000002
- type: recall_at_10
value: 82.878
- type: recall_at_100
value: 94.527
- type: recall_at_1000
value: 98.24199999999999
- type: recall_at_3
value: 55.589
- type: recall_at_5
value: 69.476
- type: main_score
value: 84.043
task:
type: Retrieval
- dataset:
config: default
name: MTEB VideoRetrieval (default)
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
split: dev
type: C-MTEB/VideoRetrieval
metrics:
- type: map_at_1
value: 70.1
- type: map_at_10
value: 79.62
- type: map_at_100
value: 79.804
- type: map_at_1000
value: 79.804
- type: map_at_3
value: 77.81700000000001
- type: map_at_5
value: 79.037
- type: mrr_at_1
value: 70.1
- type: mrr_at_10
value: 79.62
- type: mrr_at_100
value: 79.804
- type: mrr_at_1000
value: 79.804
- type: mrr_at_3
value: 77.81700000000001
- type: mrr_at_5
value: 79.037
- type: ndcg_at_1
value: 70.1
- type: ndcg_at_10
value: 83.83500000000001
- type: ndcg_at_100
value: 84.584
- type: ndcg_at_1000
value: 84.584
- type: ndcg_at_3
value: 80.282
- type: ndcg_at_5
value: 82.472
- type: precision_at_1
value: 70.1
- type: precision_at_10
value: 9.68
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 29.133
- type: precision_at_5
value: 18.54
- type: recall_at_1
value: 70.1
- type: recall_at_10
value: 96.8
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 87.4
- type: recall_at_5
value: 92.7
- type: main_score
value: 83.83500000000001
task:
type: Retrieval
tags:
- mteb
---
# Chuxin-Embedding
<!-- Provide a quick summary of what the model is/does. -->
Chuxin-Embedding 是专为增强中文文本检索能力而设计的嵌入模型。它基于bge-m3-retromae[1],实现了预训练、微调、精调全流程。该模型在来自各个领域的大量语料库上进行训练,语料库的批量非常大。截至 2024 年 9 月 14 日, Chuxin-Embedding 在检索任务中表现出色,在 C-MTEB 中文检索排行榜上排名第一,领先的性能得分为 77.88,在AIR-Bench中文检索+重排序公开排行榜上排名第一,领先的性能得分为 64.78。
Chuxin-Embedding is a specially designed embedding model aimed at enhancing the capability of Chinese text retrieval. It is based on bge-m3-retromae[1] and implements the entire process of pre-training, fine-tuning, and refining. This model has been trained on a vast amount of corpora from various fields. As of September 14, 2024, Chuxin-Embedding has shown outstanding performance in retrieval tasks. It ranks first on the C-MTEB Chinese Retrieval Leaderboard with a leading performance score of 77.88 and also ranks first on the AIR-Bench Chinese Retrieval + Re-ranking Public Leaderboard with a leading performance score of 64.78.
## News
- 2024/10/18: LLM generation and data cleaning [Code](https://github.com/chuxin-llm/Chuxin-Embedding/blob/main/README_LLM.md).
- 2024/9/14: 团队的RAG框架欢迎试用 [ragnify](https://github.com/chuxin-llm/ragnify) 。
- 2024/9/14: LLM generation and data clean [Code](https://github.com/chuxin-llm/Chuxin-Embedding) .
- 2024/9/14: The team's RAG framework is available for trial [ragnify](https://github.com/chuxin-llm/ragnify) .
## Training Details

基于bge-m3-retromae[1],主要改动如下:
<!-- Provide a longer summary of what this model is. -->
- 基于bge-m3-retromae[1]在亿级数据上预训练。
- 使用BGE pretrain [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain) 完成预训练。
- 在收集的公开亿级检索数据集上实现了微调。
- 使用BGE finetune [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) 完成微调。
- 在收集的公开百万级检索数据集和百万级LLM合成数据集上实现了精调。
- 使用BGE finetune [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) 和 BGE unified_finetune [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/unified_finetune) 完成精调。
- 通过 LLM (QWEN-72B) 进行数据生成,使用 LLM 为message生成新query
- 数据清洗:
- 简单的基于规则清洗
- LLM判断是否可作为搜索引擎查询的query
- rerank模型对(query,message)评分,舍弃pos中的负例,neg中的正例
Based on bge-m3-retromae[1], the main modifications are as follows:
- Pre-trained on a billion-level dataset based on bge-m3-retromae[1].
- Pre-training is completed using BGE pretrain [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain) .
- Fine-tuned on a publicly collected billion-level retrieval dataset.
- Fine-tuning is completed using BGE finetune [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).
- Refined on a publicly collected million-level retrieval dataset and a million-level LLM synthetic dataset.
- Refining is completed using BGE finetune [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) and BGE unified_finetune [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/unified_finetune).
- Data generation is performed through LLM (QWEN-72B), using LLM to generate new query for messages.
- Data cleaning:
- Simple rule-based cleaning
- LLM to determine whether a query can be used as a search engine query
- The rerank model scores (query, message) pairs, discarding negative examples in the positive set and positive examples in the negative set.
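A minimal sketch of the rerank-based filtering step described in the last bullet above (the reranker checkpoint and the score thresholds are assumptions, not the values used in the original pipeline):

```python
from FlagEmbedding import FlagReranker

# Assumed thresholds: keep a positive only if the reranker scores it high enough,
# keep a negative only if it scores low enough.
POS_MIN, NEG_MAX = 1.0, -1.0

reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)

def score(query, passages):
    scores = reranker.compute_score([[query, p] for p in passages])
    return scores if isinstance(scores, list) else [scores]

def clean_example(query, positives, negatives):
    kept_pos = [p for p, s in zip(positives, score(query, positives)) if s >= POS_MIN]  # drop false positives
    kept_neg = [n for n, s in zip(negatives, score(query, negatives)) if s <= NEG_MAX]  # drop false negatives
    return kept_pos, kept_neg
```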
## Collect more data for retrieval-type tasks
1. Pre-training data
- ChineseWebText, oasis, oscar, SkyPile, wudao
2. Fine-tuning data
- MTP, webqa, nlpcc, csl, bq, atec, ccks
3. Refining data
- BGE-M3, Huatuo26M-Lite, covid ...
- LLM-synthesized (BGE-M3, Huatuo26M-Lite, covid, wudao, wanjuan_news, mnbvc_news_wiki, mldr, medical QA...)
## Performance
**C_MTEB RETRIEVAL**
| Model | **Average** | **CmedqaRetrieval** | **CovidRetrieval** | **DuRetrieval** | **EcomRetrieval** | **MedicalRetrieval** | **MMarcoRetrieval** | **T2Retrieval** | **VideoRetrieval** |
| :-------------------: | :---------: | :-------: | :------------: | :-----------: | :-----------: | :-------: | :----------: | :-------: | :----------: |
| Zhihui_LLM_Embedding | 76.74 | 48.69 | 84.39 | 91.34 | 71.96 | 65.19 | 84.77 |88.3 | 79.31 |
| zpoint_large_embedding_zh | 76.36 | 47.16 | 89.14 | 89.23 | 70.74 | 68.14 | 82.38 | 83.81 | 80.26 |
| **Chuxin-Embedding** | **77.88** | 56.58 | 84.28 | 85.65 | 74.01 | 75.62 | 79.06 | 84.04 | 83.84 |
**AIR-Bench**
| Retrieval Method | Reranking Model | **Average** | **wiki_zh** | **web_zh** | **news_zh** | **healthcare_zh** | **finance_zh** |
| :-------------------: | :---------:| :---------: | :-------: | :------------: | :-----------: | :-----------: | :----------: |
| bge-m3 | bge-reranker-large | 64.53 | 76.11 | 67.8 | 63.25 | 62.9 | 52.61 |
| gte-Qwen2-7B-instruct |bge-reranker-large | 63.39 | 78.09 | 67.56 | 63.14 | 61.12 | 47.02 |
| **Chuxin-Embedding** | bge-reranker-large | **64.78** |76.23 | 68.44 | 64.2 | 62.93 | 52.11 |
## Generate Embedding for text
```python
#pip install -U FlagEmbedding
from FlagEmbedding import FlagModel
model = FlagModel('chuxin-llm/Chuxin-Embedding',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True)
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-1"]
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
### Reference
1. https://huggingface.co/BAAI/bge-m3-retromae
2. https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3
3. https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding
|
jw-hf-test/jw-14B-212
|
jw-hf-test
| 2024-10-18T07:39:16Z | 189 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T07:35:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kort/i_1
|
Kort
| 2024-10-18T07:36:21Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T07:33:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Smashitup/FineLlama-3.1-8B-q8_0-GGUF
|
Smashitup
| 2024-10-18T07:34:23Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T07:33:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Smashitup/FineLlama-3.1-8B-q6_k-GGUF
|
Smashitup
| 2024-10-18T07:33:54Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T07:33:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Smashitup/FineLlama-3.1-8B-q4_k_m-GGUF
|
Smashitup
| 2024-10-18T07:33:04Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T07:32:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Smashitup/FineLlama-3.1-8B-q3_k_m-GGUF
|
Smashitup
| 2024-10-18T07:32:40Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T07:32:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Smashitup/FineLlama-3.1-8B-q2_k-GGUF
|
Smashitup
| 2024-10-18T07:32:14Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T07:31:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WANGTINGTING/finetuned-table-transformer-detection-v1
|
WANGTINGTING
| 2024-10-18T07:27:14Z | 191 | 0 |
transformers
|
[
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-10-04T07:21:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Llama-3.2-1B-Puredove-p-GGUF
|
mradermacher
| 2024-10-18T07:19:13Z | 16 | 1 |
transformers
|
[
"transformers",
"gguf",
"chat",
"en",
"dataset:LDJnr/Pure-Dove",
"base_model:ElMater06/Llama-3.2-1B-Puredove-p",
"base_model:quantized:ElMater06/Llama-3.2-1B-Puredove-p",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T07:15:25Z |
---
base_model: ElMater06/Llama-3.2-1B-Puredove-p
datasets:
- LDJnr/Pure-Dove
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ElMater06/Llama-3.2-1B-Puredove-p
<!-- provided-files -->
Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
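
As a minimal sketch (assuming the `llama-cpp-python` and `huggingface_hub` packages are installed; the chosen quant, prompt, and settings below are illustrative), one of the files listed in the table can be loaded like this:

```python
# Minimal sketch: download one quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is one of the "fast, recommended" quants in the table below.
model_path = hf_hub_download(
    repo_id="mradermacher/Llama-3.2-1B-Puredove-p-GGUF",
    filename="Llama-3.2-1B-Puredove-p.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Explain in one sentence what a GGUF quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```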
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-Puredove-p-GGUF/resolve/main/Llama-3.2-1B-Puredove-p.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
eligapris/v-mdd-2000-150
|
eligapris
| 2024-10-18T07:10:44Z | 237 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"eligapris",
"vision",
"en",
"base_model:microsoft/resnet-152",
"base_model:finetune:microsoft/resnet-152",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-10-18T06:24:24Z |
---
tags:
- autotrain
- image-classification
- eligapris
- vision
base_model: microsoft/resnet-152
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: image-classification
library_name: transformers
---
<!-- # Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.369663268327713
f1_macro: 0.6843075887364951
f1_micro: 0.858508604206501
f1_weighted: 0.8303295709630173
precision_macro: 0.8204154992433914
precision_micro: 0.858508604206501
precision_weighted: 0.882137838723129
recall_macro: 0.7169578798305077
recall_micro: 0.858508604206501
recall_weighted: 0.858508604206501
accuracy: 0.858508604206501 -->
# Corn Leaf Disease Classification Model Analysis
## Dataset Breakdown
The dataset consists of four classes with the following distribution:
| Class | Number of Images |
|------------------------|-------------------|
| Healthy_Leaf | 3021 |
| Gray_Leaf_Spot | 2478 |
| Common_Rust | 2949 |
| Northern_Leaf_Blight | 3303 |
**Note:** There is a slight class imbalance, with Gray_Leaf_Spot having notably fewer images compared to the other classes.
## Model Performance Metrics
The model was trained using AutoTrain for image classification. Here's a breakdown of the validation metrics:
| Metric | Value |
|-----------------------|-----------|
| Loss | 0.3697 |
| Accuracy | 0.8585 |
| F1 (Macro) | 0.6843 |
| F1 (Micro) | 0.8585 |
| F1 (Weighted) | 0.8303 |
| Precision (Macro) | 0.8204 |
| Precision (Micro) | 0.8585 |
| Precision (Weighted) | 0.8821 |
| Recall (Macro) | 0.7170 |
| Recall (Micro) | 0.8585 |
| Recall (Weighted) | 0.8585 |
### Metric Explanations
1. **Loss (0.3697)**: This relatively low validation loss indicates the model has fit the data reasonably well.
2. **Accuracy (0.8585)**: The model correctly classifies 85.85% of all instances across all classes.
3. **F1 Score**:
- Macro (0.6843): The unweighted mean of F1 scores for each class.
- Micro (0.8585): Calculated globally by counting the total true positives, false negatives, and false positives.
- Weighted (0.8303): The weighted average of F1 scores for each class, accounting for class imbalance.
4. **Precision**:
- Macro (0.8204): The unweighted mean of precision scores for each class.
- Micro (0.8585): The global precision across all classes.
- Weighted (0.8821): The weighted average of precision scores for each class.
5. **Recall**:
- Macro (0.7170): The unweighted mean of recall scores for each class.
- Micro (0.8585): The global recall across all classes.
- Weighted (0.8585): The weighted average of recall scores for each class.
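
These averaging schemes correspond directly to scikit-learn's `average` argument; the snippet below is a toy illustration with made-up labels, where 0-3 stand in for the four leaf classes.

```python
# Toy illustration of macro vs. micro vs. weighted averaging with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 0, 1, 1, 2, 2, 3, 3, 3, 1]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 1, 1]

print("accuracy:", accuracy_score(y_true, y_pred))
for avg in ("macro", "micro", "weighted"):
    print(f"f1 ({avg}):", f1_score(y_true, y_pred, average=avg))
    print(f"precision ({avg}):", precision_score(y_true, y_pred, average=avg, zero_division=0))
    print(f"recall ({avg}):", recall_score(y_true, y_pred, average=avg, zero_division=0))
```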
### Analysis
1. **Class Imbalance**: The difference between macro and micro scores suggests class imbalance, which aligns with our dataset breakdown. The Gray_Leaf_Spot class, having fewer images, likely contributes to this imbalance.
2. **Precision vs Recall**: Precision scores are generally higher than recall scores, especially for macro metrics. This suggests the model is more cautious in its predictions, preferring to be correct when it does predict a class.
3. **Performance on Majority vs Minority Classes**: The higher micro and weighted scores compared to macro scores indicate that the model performs better on more frequent classes. This is likely due to the class imbalance, with the model potentially struggling more with the Gray_Leaf_Spot class.
4. **Overall Performance**: With an accuracy of 85.85%, the model shows good overall performance. However, there's room for improvement, especially in handling the class imbalance.
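
One common way to address this imbalance would be to weight the training loss inversely to class frequency. The sketch below uses the image counts from the dataset table above; the loss setup itself is purely illustrative, not how this model was trained.

```python
# Illustrative inverse-frequency class weighting using the counts from the dataset table.
import torch

counts = {
    "Healthy_Leaf": 3021,
    "Gray_Leaf_Spot": 2478,
    "Common_Rust": 2949,
    "Northern_Leaf_Blight": 3303,
}

total = sum(counts.values())
# Inverse-frequency weights, normalized so the average weight is roughly 1.0.
weights = torch.tensor([total / (len(counts) * n) for n in counts.values()])

# Gray_Leaf_Spot (the smallest class) receives the largest weight.
criterion = torch.nn.CrossEntropyLoss(weight=weights)
print(dict(zip(counts, weights.tolist())))
```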
|
QuantFactory/Gemma-2-Ataraxy-v4a-Advanced-9B-GGUF
|
QuantFactory
| 2024-10-18T07:06:48Z | 35 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T06:07:04Z |
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
model-index:
- name: Gemma-2-Ataraxy-v4a-Advanced-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 71.35
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 42.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 2.19
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.53
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.18
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 36.77
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B
name: Open LLM Leaderboard
---
[](https://hf.co/QuantFactory)
# QuantFactory/Gemma-2-Ataraxy-v4a-Advanced-9B-GGUF
This is a quantized version of [lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B), created using llama.cpp.
# Original Model Card
# Gemma-2-Ataraxy-v4a-Advanced-9B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B)
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B
dtype: bfloat16
merge_method: slerp
parameters:
t: 0.25
slices:
- sources:
- layer_range: [0, 42]
model: lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B
- layer_range: [0, 42]
model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
```
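
For reference, spherical linear interpolation at t = 0.25 (the value in the config above) can be sketched roughly as follows; this is a simplified illustration on flattened weight vectors, not mergekit's actual implementation, which also handles per-layer options and edge cases.

```python
# Illustrative SLERP between two weight vectors; not mergekit's exact code.
import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

# Pretend these are flattened weights from the two parent models.
w_base = np.random.randn(1024)   # lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B
w_other = np.random.randn(1024)  # zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
merged = slerp(w_base, w_other, t=0.25)
print(merged.shape)
```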
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__Gemma-2-Ataraxy-v4a-Advanced-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |30.13|
|IFEval (0-Shot) |71.35|
|BBH (3-Shot) |42.74|
|MATH Lvl 5 (4-Shot)| 2.19|
|GPQA (0-shot) |12.53|
|MuSR (0-shot) |15.18|
|MMLU-PRO (5-shot) |36.77|
|
timaeus/dm16
|
timaeus
| 2024-10-18T07:06:29Z | 7 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-10-18T07:05:54Z |
# dm16 Checkpoints
This repository contains the final trained model and intermediate checkpoints.
- The main directory contains the fully trained model (checkpoint 75000).
- The `checkpoints` directory contains all intermediate checkpoints.
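
A minimal sketch for pulling the repository and listing the intermediate checkpoints (assuming the `huggingface_hub` package; the directory layout is as described above):

```python
# Sketch: download the repo snapshot and list what lives under checkpoints/.
import os
from huggingface_hub import snapshot_download

local_dir = snapshot_download("timaeus/dm16")
ckpt_dir = os.path.join(local_dir, "checkpoints")
if os.path.isdir(ckpt_dir):
    print(sorted(os.listdir(ckpt_dir)))  # intermediate checkpoints
else:
    print("no checkpoints/ directory found in this snapshot")
```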
|
timaeus/dm8
|
timaeus
| 2024-10-18T07:05:50Z | 8 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-10-18T07:05:11Z |
# dm8 Checkpoints
This repository contains the final trained model and intermediate checkpoints.
- The main directory contains the fully trained model (checkpoint 75000).
- The `checkpoints` directory contains all intermediate checkpoints.
|
zeynepcetin/distilbert-base-uncased-zeynepc
|
zeynepcetin
| 2024-10-18T07:04:23Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-14T08:40:08Z |
---
base_model: distilbert-base-uncased
library_name: transformers
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-zeynepc
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-zeynepc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
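
Read as Keras code, the optimizer settings above correspond roughly to the following illustrative snippet (not the original training script):

```python
# Illustrative reconstruction of the optimizer configuration listed above.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
print(optimizer.get_config())
```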
### Training results
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Tokenizers 0.19.1
|
aravindvelmurugan/base-qa-v1
|
aravindvelmurugan
| 2024-10-18T07:00:43Z | 121 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:rajpurkar/squad",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-10-18T06:49:55Z |
---
license: mit
datasets:
- rajpurkar/squad
language:
- en
metrics:
- bleu
base_model:
- google-bert/bert-base-uncased
new_version: google-bert/bert-base-uncased
library_name: transformers
---
|
Cony7010/my-bert-base-uncased
|
Cony7010
| 2024-10-18T06:54:18Z | 161 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-18T06:53:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
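As a starting point, a minimal sketch for extracting sentence embeddings, assuming the repository holds a plain BERT encoder (matching its `feature-extraction` tag):
```py
# Minimal sketch (assumption: the repo is a standard BERT encoder used for feature extraction).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Cony7010/my-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("An example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Mean-pool the last hidden state into a single sentence vector
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # e.g. torch.Size([1, 768]) for a base-sized encoder
```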
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Leejy0-0/gemma-2b-it-sum-ko
|
Leejy0-0
| 2024-10-18T06:49:09Z | 2,000 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T04:57:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
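As a starting point, a minimal sketch, assuming the repository is a Gemma-2b-it checkpoint fine-tuned for Korean summarization that keeps the stock Gemma chat template:
```py
# Minimal sketch (assumptions: Gemma-2b-it chat template; Korean summarization fine-tune).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Leejy0-0/gemma-2b-it-sum-ko"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# "Please summarize the following text in one paragraph: ..." (the ellipsis is a placeholder)
messages = [{"role": "user", "content": "다음 글을 한 문단으로 요약해 주세요: ..."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```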
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
spneshaei/reflectium_and_mindmate_fold4
|
spneshaei
| 2024-10-18T06:48:09Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-18T06:47:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
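As a starting point, a minimal sketch, assuming the repository holds a standard BERT sequence-classification head (the label names are not documented in this card):
```py
# Minimal sketch (assumption: standard BERT sequence-classification head; labels undocumented).
from transformers import pipeline

classifier = pipeline("text-classification", model="spneshaei/reflectium_and_mindmate_fold4")
print(classifier("An example reflection text to classify."))
# -> [{'label': 'LABEL_0', 'score': ...}] unless the config maps ids to named labels
```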
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
spneshaei/reflectium_and_mindmate_fold3
|
spneshaei
| 2024-10-18T06:38:05Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-18T06:37:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tenna18/distilbert-base-uncased-finetuned-clinc
|
tenna18
| 2024-10-18T06:34:39Z | 5 | 0 | null |
[
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2024-10-14T09:48:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
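For reference, a hedged sketch of how the values above map onto 🤗 `TrainingArguments`; the dataset, model head and metric wiring are not documented in this card, so only the listed settings are filled in:
```py
# Hedged sketch: reproduces only the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",  # Adam with betas=(0.9, 0.999) and eps=1e-8 is the Trainer default
    num_train_epochs=5,
)
```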
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2895 | 1.0 | 318 | 3.2884 | 0.7419 |
| 2.6277 | 2.0 | 636 | 1.8751 | 0.8368 |
| 1.5479 | 3.0 | 954 | 1.1569 | 0.8961 |
| 1.0148 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7952 | 5.0 | 1590 | 0.7721 | 0.9181 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
jimbowyer123/otterchess
|
jimbowyer123
| 2024-10-18T06:25:09Z | 221 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T00:06:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Falah/al-halbousi
|
Falah
| 2024-10-18T06:13:00Z | 5 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-17T06:14:31Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Al Halbousi
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on a CUDA device
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this repository's LoRA weights
pipeline.load_lora_weights('Falah/al-halbousi', weight_name='lora.safetensors')
# Generate an image; include the trigger word `TOK` in the prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
timaeus/dm1024
|
timaeus
| 2024-10-18T06:09:45Z | 5 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-10-18T04:52:45Z |
# dm1024 Checkpoints
This repository contains the final trained model and intermediate checkpoints.
- The main directory contains the fully trained model (checkpoint 75000).
- The `checkpoints` directory contains all intermediate checkpoints.
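A hedged sketch for fetching a single intermediate checkpoint with `huggingface_hub`; the per-checkpoint directory names are not listed here, so list the repo files first and substitute a real path for the placeholder:
```py
# Hedged sketch: the step directory below is a placeholder; inspect the real layout first.
from huggingface_hub import list_repo_files, snapshot_download

files = [f for f in list_repo_files("timaeus/dm1024") if f.startswith("checkpoints/")]
print(files[:10])  # discover the actual checkpoint directory names

local_dir = snapshot_download(
    repo_id="timaeus/dm1024",
    allow_patterns=["checkpoints/<step>/*"],  # placeholder pattern
)
```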
|
Swekerr/Text2SqLlama-3B-GGUF
|
Swekerr
| 2024-10-18T06:06:12Z | 25 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-18T05:40:16Z |
---
base_model: unsloth/llama-3.2-3b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
datasets:
- gretelai/synthetic_text_to_sql
library_name: transformers
---
# Uploaded model
- **Developed by:** Swekerr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
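A hedged usage sketch with `llama-cpp-python`, assuming one of the quantized GGUF files in the repository (the filename pattern below is a placeholder; pick the exact file from the repo's file list):
```py
# Hedged sketch: the GGUF filename pattern is a placeholder; choose a real file from the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Swekerr/Text2SqLlama-3B-GGUF",
    filename="*Q4_K_M.gguf",  # placeholder quantization level
    n_ctx=2048,
)
prompt = "-- Table: employees(id, name, department, salary)\n-- Question: average salary per department\nSELECT"
out = llm.create_completion(prompt, max_tokens=128, stop=[";"])
print(out["choices"][0]["text"])
```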
|
ST23423/gpt2-pretrained-custom-eli5tech
|
ST23423
| 2024-10-18T06:00:31Z | 135 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T06:00:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/Gemma-2-Ataraxy-v4c-9B-GGUF
|
QuantFactory
| 2024-10-18T05:47:04Z | 20 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:lemon07r/Gemma-2-Ataraxy-v3b-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-v3b-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T05:21:38Z |
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- lemon07r/Gemma-2-Ataraxy-v3b-9B
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
model-index:
- name: Gemma-2-Ataraxy-v4c-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 69.45
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 44.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 17.98
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.19
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.3
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.72
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
---
[](https://hf.co/QuantFactory)
# QuantFactory/Gemma-2-Ataraxy-v4c-9B-GGUF
This is a quantized version of [lemon07r/Gemma-2-Ataraxy-v4c-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4c-9B), created using llama.cpp.
# Original Model Card
# Gemma-2-Ataraxy-v4c-9B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
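For illustration (not part of the upstream card), spherical linear interpolation between two weight tensors `p` and `q` at mixing factor `t` can be sketched as below; mergekit's actual implementation additionally handles per-layer parameters and the `t` schedule from the YAML configuration further down.
```py
# Illustrative SLERP on flattened weight tensors (simplified relative to mergekit).
import torch

def slerp(p: torch.Tensor, q: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    p_n, q_n = p / (p.norm() + eps), q / (q.norm() + eps)
    omega = torch.arccos(((p_n * q_n).sum()).clamp(-1 + eps, 1 - eps))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel weights: fall back to linear interpolation
        return (1.0 - t) * p + t * q
    return (torch.sin((1.0 - t) * omega) / so) * p + (torch.sin(t * omega) / so) * q
```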
### Models Merged
The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-v3b-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3b-9B)
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
dtype: bfloat16
merge_method: slerp
parameters:
t: 0.25
slices:
- sources:
- layer_range: [0, 42]
model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- layer_range: [0, 42]
model: lemon07r/Gemma-2-Ataraxy-v3b-9B
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__Gemma-2-Ataraxy-v4c-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |32.63|
|IFEval (0-Shot) |69.45|
|BBH (3-Shot) |44.13|
|MATH Lvl 5 (4-Shot)|17.98|
|GPQA (0-shot) |11.19|
|MuSR (0-shot) |15.30|
|MMLU-PRO (5-shot) |37.72|
|
lightsout19/gpt2-moe-top2-4-partitioned-cola
|
lightsout19
| 2024-10-18T05:43:10Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2024-10-18T05:35:46Z |
---
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: gpt2-moe-top2-4-partitioned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-moe-top2-4-partitioned-cola
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6223
- Matthews Correlation: -0.0359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.6221 | 0.0 |
| 0.6123 | 2.0 | 536 | 0.6191 | 0.0 |
| 0.6123 | 3.0 | 804 | 0.6162 | 0.0 |
| 0.6031 | 4.0 | 1072 | 0.6129 | 0.0 |
| 0.6031 | 5.0 | 1340 | 0.6088 | 0.0 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Lichang-Chen/llama3-8b-point27-template
|
Lichang-Chen
| 2024-10-18T05:38:18Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T05:34:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cuongdev/sieusieu
|
cuongdev
| 2024-10-18T05:36:28Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-18T05:32:38Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### sieusieu Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
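It can also be loaded with 🧨 diffusers; a minimal sketch, assuming the notebook exported a complete Stable Diffusion pipeline to this repository:
```py
# Minimal sketch (assumption: the repo holds a full StableDiffusionPipeline export).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("cuongdev/sieusieu", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sieusieu").images[0]  # include the concept name in the prompt
image.save("sieusieu.png")
```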
Sample pictures of this concept:
|
lhstest/testllama32s
|
lhstest
| 2024-10-18T05:29:58Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-18T05:21:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
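As a starting point, a minimal sketch, assuming the repository is a Gemma-2 causal LM saved with a 4-bit bitsandbytes quantization config (as its tags suggest), so it loads quantized on a CUDA device:
```py
# Minimal sketch (assumption: Gemma-2 causal LM with a bundled 4-bit bitsandbytes config).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lhstest/testllama32s"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```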
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BroAlanTaps/GPT2-large-4-54000steps
|
BroAlanTaps
| 2024-10-18T05:29:21Z | 136 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T05:27:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
t666moriginal/timmyboy
|
t666moriginal
| 2024-10-18T05:27:12Z | 17 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-18T05:27:09Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Timmyboy
---
# Timmyboy
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Timmyboy` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('t666moriginal/timmyboy', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
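A minimal follow-up sketch (the prompt text and seed below are illustrative, not taken from the training data) showing the trigger word in use:
```py
# assumes `pipeline` was created and the LoRA weights loaded as in the snippet above
import torch

prompt = "portrait photo of Timmyboy wearing a leather jacket, studio lighting"
image = pipeline(
    prompt,
    generator=torch.Generator("cuda").manual_seed(0),  # fixed seed for reproducible samples
).images[0]
image.save("timmyboy.png")
```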
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lovis93/testllm
|
lovis93
| 2024-10-18T05:17:53Z | 7,102 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"image-generation",
"flux",
"en",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] |
text-to-image
| 2024-10-17T12:24:36Z |
---
language:
- en
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
extra_gated_prompt: By clicking "Agree", you agree to the [FluxDev Non-Commercial License Agreement](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md)
and acknowledge the [Acceptable Use Policy](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/POLICY.md).
tags:
- text-to-image
- image-generation
- flux
---
![FLUX.1 [dev] Grid](./dev_grid.jpg)
`FLUX.1 [dev]` is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions.
For more information, please read our [blog post](https://blackforestlabs.ai/announcing-black-forest-labs/).
# Key Features
1. Cutting-edge output quality, second only to our state-of-the-art model `FLUX.1 [pro]`.
2. Competitive prompt following, matching the performance of closed-source alternatives.
3. Trained using guidance distillation, making `FLUX.1 [dev]` more efficient.
4. Open weights to drive new scientific research, and empower artists to develop innovative workflows.
5. Generated outputs can be used for personal, scientific, and commercial purposes as described in the [`FLUX.1 [dev]` Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
# Usage
We provide a reference implementation of `FLUX.1 [dev]`, as well as sampling code, in a dedicated [github repository](https://github.com/black-forest-labs/flux).
Developers and creatives looking to build on top of `FLUX.1 [dev]` are encouraged to use this as a starting point.
## API Endpoints
The FLUX.1 models are also available via API from the following sources:
- [bfl.ml](https://docs.bfl.ml/) (currently `FLUX.1 [pro]`)
- [replicate.com](https://replicate.com/collections/flux)
- [fal.ai](https://fal.ai/models/fal-ai/flux/dev)
- [mystic.ai](https://www.mystic.ai/black-forest-labs/flux1-dev)
## ComfyUI
`FLUX.1 [dev]` is also available in [Comfy UI](https://github.com/comfyanonymous/ComfyUI) for local inference with a node-based workflow.
## Diffusers
To use `FLUX.1 [dev]` with the 🧨 diffusers python library, first install or upgrade diffusers
```shell
pip install -U diffusers
```
Then you can use `FluxPipeline` to run the model
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power
prompt = "A cat holding a sign that says hello world"
image = pipe(
prompt,
height=1024,
width=1024,
guidance_scale=3.5,
num_inference_steps=50,
max_sequence_length=512,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("flux-dev.png")
```
To learn more check out the [diffusers](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) documentation
---
# Limitations
- This model is not intended or able to provide factual information.
- As a statistical model this checkpoint might amplify existing societal biases.
- The model may fail to generate output that matches the prompts.
- Prompt following is heavily influenced by the prompting-style.
# Out-of-Scope Use
The model and its derivatives may not be used:
- In any way that violates any applicable national, federal, state, local or international law or regulation.
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; including but not limited to the solicitation, creation, acquisition, or dissemination of child exploitative content.
- To generate or disseminate verifiably false information and/or content with the purpose of harming others.
- To generate or disseminate personal identifiable information that can be used to harm an individual.
- To harass, abuse, threaten, stalk, or bully individuals or groups of individuals.
- To create non-consensual nudity or illegal pornographic content.
- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation.
- Generating or facilitating large-scale disinformation campaigns.
# License
This model falls under the [`FLUX.1 [dev]` Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
Vikhrmodels/Vikhr-7B-instruct_0.2
|
Vikhrmodels
| 2024-10-18T05:16:39Z | 182 | 21 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ru",
"en",
"dataset:zjkarina/Vikhr_instruct",
"dataset:dichspace/darulm",
"arxiv:2405.13929",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-12T10:35:20Z |
---
language:
- ru
- en
datasets:
- zjkarina/Vikhr_instruct
- dichspace/darulm
---
GGUF version: https://huggingface.co/pirbis/Vikhr-7B-instruct_0.2-GGUF
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
import os
os.environ['HF_HOME']='.'
MODEL_NAME = "Vikhrmodels/Vikhr-7B-instruct_0.2"
DEFAULT_MESSAGE_TEMPLATE = "<s>{role}\n{content}</s>\n"
DEFAULT_SYSTEM_PROMPT = "Ты — Вихрь, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."  # "You are Vikhr, a Russian-language assistant. You talk to people and help them."
class Conversation:
def __init__(
self,
message_template=DEFAULT_MESSAGE_TEMPLATE,
system_prompt=DEFAULT_SYSTEM_PROMPT,
):
self.message_template = message_template
self.messages = [{
"role": "system",
"content": system_prompt
}]
def add_user_message(self, message):
self.messages.append({
"role": "user",
"content": message
})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += 'bot'
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(prompt, return_tensors="pt")
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(
**data,
generation_config=generation_config
)[0]
output_ids = output_ids[len(data["input_ids"][0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
#config = PeftConfig.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto"
)
#model = PeftModel.from_pretrained( model, MODEL_NAME, torch_dtype=torch.float16)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=False)
generation_config = GenerationConfig.from_pretrained(MODEL_NAME)
generation_config.max_length=256
generation_config.top_p=0.9
generation_config.top_k=30
generation_config.do_sample = True
print(generation_config)
inputs = ["Как тебя зовут?", "Кто такой Колмогоров?"]  # "What is your name?", "Who is Kolmogorov?"
for inp in inputs:
conversation = Conversation()
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
print('\n')
```
[wandb](https://wandb.ai/karina_romanova/vikhr/runs/up2hw5eh?workspace=user-karina_romanova)
## Cite
```
@inproceedings{nikolich2024vikhr,
  title={Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for {Russian}},
  author={Aleksandr Nikolich and Konstantin Korolev and Sergei Bratchikov and Igor Kiselev and Artem Shelmanov},
  booktitle={Proceedings of the 4th Workshop on Multilingual Representation Learning (MRL) @ EMNLP-2024},
  year={2024},
  publisher={Association for Computational Linguistics},
  url={https://arxiv.org/pdf/2405.13929}
}
```
|
clw8998/Product-Classification-Model-Distilled
|
clw8998
| 2024-10-18T05:10:40Z | 35,483 | 0 | null |
[
"onnx",
"bert",
"license:apache-2.0",
"region:us"
] | null | 2024-10-18T05:09:57Z |
---
license: apache-2.0
---
|
theprint/CleverBoi-Llama-3.1-8B-v2
|
theprint
| 2024-10-18T05:07:41Z | 51 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"theprint",
"cleverboi",
"text-generation",
"conversational",
"en",
"dataset:theprint/CleverBoi-Data-20k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-15T03:27:16Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- theprint
- cleverboi
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
datasets:
- theprint/CleverBoi-Data-20k
pipeline_tag: text-generation
model-index:
- name: CleverBoi-Llama-3.1-8B-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 19.61
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Llama-3.1-8B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 24.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Llama-3.1-8B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 4.46
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Llama-3.1-8B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.81
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Llama-3.1-8B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.72
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Llama-3.1-8B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.31
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Llama-3.1-8B-v2
name: Open LLM Leaderboard
---
<img src="https://huggingface.co/theprint/CleverBoi-Gemma-2-9B/resolve/main/cleverboi.png"/>
# CleverBoi
The CleverBoi series consists of models fine-tuned on a collection of datasets that emphasize logic, inference, math, and coding, known collectively as the CleverBoi dataset.
This model was fine-tuned for 1 epoch on the CleverBoi-Data-20k dataset.
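For reference, a minimal inference sketch with 🤗 transformers (the prompt and generation settings are illustrative, and this assumes the tokenizer ships a chat template; GGUF files are also included in the repository for llama.cpp-based runtimes):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "theprint/CleverBoi-Llama-3.1-8B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# build a chat-style prompt using the tokenizer's chat template
messages = [{"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```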
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_theprint__CleverBoi-Llama-3.1-8B-v2)
| Metric |Value|
|-------------------|----:|
|Avg. |14.01|
|IFEval (0-Shot) |19.61|
|BBH (3-Shot) |24.13|
|MATH Lvl 5 (4-Shot)| 4.46|
|GPQA (0-shot) | 4.81|
|MuSR (0-shot) | 6.72|
|MMLU-PRO (5-shot) |24.31|
|
QuantFactory/Gemma-2-Ataraxy-v4b-9B-GGUF
|
QuantFactory
| 2024-10-18T05:05:54Z | 64 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:lemon07r/Gemma-2-Ataraxy-v3b-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-v3b-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-18T03:57:36Z |
---
base_model:
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- lemon07r/Gemma-2-Ataraxy-v3b-9B
library_name: transformers
tags:
- mergekit
- merge
---
[](https://hf.co/QuantFactory)
# QuantFactory/Gemma-2-Ataraxy-v4b-9B-GGUF
This is a quantized version of [lemon07r/Gemma-2-Ataraxy-v4b-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4b-9B), created using llama.cpp.
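A minimal sketch of running one of the GGUF files locally with llama.cpp (the quantization filename below is an assumption; pick whichever file in this repository fits your hardware):
```shell
# download a single quantized file from this repo (filename is illustrative)
huggingface-cli download QuantFactory/Gemma-2-Ataraxy-v4b-9B-GGUF Gemma-2-Ataraxy-v4b-9B.Q4_K_M.gguf --local-dir .

# run an interactive prompt with llama.cpp
llama-cli -m Gemma-2-Ataraxy-v4b-9B.Q4_K_M.gguf -p "Write a haiku about mountains." -n 128
```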
# Original Model Card
# Gemma-2-Ataraxy-v4b-9B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
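For context, SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than along a straight line; with interpolation factor $t$ and angle $\theta$ between the flattened tensors $p$ and $q$:

$$\mathrm{slerp}(p, q; t) = \frac{\sin\big((1-t)\theta\big)}{\sin\theta}\, p + \frac{\sin(t\theta)}{\sin\theta}\, q$$

so $t = 0$ returns the base model's tensor and $t = 1$ the other model's; the per-layer `t` values in the configuration below vary this factor across the self-attention and MLP blocks.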
### Models Merged
The following models were included in the merge:
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
* [lemon07r/Gemma-2-Ataraxy-v3b-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3b-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: lemon07r/Gemma-2-Ataraxy-v3b-9B
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 42]
model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- layer_range: [0, 42]
model: lemon07r/Gemma-2-Ataraxy-v3b-9B
```
|
mhbkb/stable-diffusion-base-2.0-text-to-image-1
|
mhbkb
| 2024-10-18T05:03:25Z | 14 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-18T04:07:40Z |
---
base_model: stabilityai/stable-diffusion-2
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - mhbkb/stable-diffusion-base-2.0-text-to-image-1
This pipeline was fine-tuned from **stabilityai/stable-diffusion-2** on the **None** dataset. Below are some example images generated with the fine-tuned pipeline using the prompt 'a photo of a dog':

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("mhbkb/stable-diffusion-base-2.0-text-to-image-1", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # fp16 weights need a GPU; drop torch_dtype to run on CPU
prompt = "a photo of a dog"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 20
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 8
* Image resolution: 768
* Mixed-precision: fp16
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/javabkb-university-of-arizona/text2image-fine-tune/runs/qzljwzgz).
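These hyperparameters correspond roughly to an invocation of the diffusers `train_text_to_image.py` example script like the following (a hypothetical sketch; the dataset used is not recorded in this card and is shown as a placeholder):
```shell
# hypothetical invocation; the dataset argument is a placeholder
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-2" \
  --dataset_name="<your-dataset>" \
  --resolution=768 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=8 \
  --learning_rate=1e-05 \
  --num_train_epochs=20 \
  --mixed_precision="fp16" \
  --output_dir="stable-diffusion-base-2.0-text-to-image-1"
```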
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
xonic48/distilbert-base-uncased-finetuned-imdb
|
xonic48
| 2024-10-18T04:58:26Z | 135 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-10-18T03:39:24Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
- Model Preparation Time: 0.0018
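For a quick check of the checkpoint, it can be queried for masked-token predictions with the 🤗 transformers pipeline (a minimal sketch; the example sentence is illustrative):
```python
from transformers import pipeline

# score candidate tokens for the masked position
unmasker = pipeline("fill-mask", model="xonic48/distilbert-base-uncased-finetuned-imdb")
print(unmasker("This movie was an absolute [MASK]."))
```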
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|
| 2.6819 | 1.0 | 157 | 2.4978 | 0.0018 |
| 2.5872 | 2.0 | 314 | 2.4488 | 0.0018 |
| 2.527 | 3.0 | 471 | 2.4823 | 0.0018 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
mohsen22/tamin_llama3.2
|
mohsen22
| 2024-10-18T04:56:59Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-18T04:20:07Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nicholasbien/gpt2_finetuned-lmd_full-rm
|
nicholasbien
| 2024-10-18T04:54:12Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-10-18T03:19:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pwork7/mag_baseline_iter3
|
pwork7
| 2024-10-18T04:53:52Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T04:50:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aningddd/swinv2-base
|
aningddd
| 2024-10-18T04:45:51Z | 135 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-base-patch4-window12-192-22k",
"base_model:finetune:microsoft/swinv2-base-patch4-window12-192-22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-10-09T13:34:27Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swinv2-base-patch4-window12-192-22k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swinv2-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-base-patch4-window12-192-22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1570
- Accuracy: 0.9577
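For reference, the checkpoint can be used for inference with the 🤗 transformers image-classification pipeline (a minimal sketch; the file path is illustrative):
```python
from transformers import pipeline

# classify a local image with the fine-tuned SwinV2 checkpoint
classifier = pipeline("image-classification", model="aningddd/swinv2-base")
print(classifier("example.jpg"))
```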
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1001 | 1.0 | 240 | 1.0510 | 0.5701 |
| 0.7709 | 2.0 | 480 | 0.7516 | 0.7091 |
| 0.5077 | 3.0 | 720 | 0.5670 | 0.7953 |
| 0.2908 | 4.0 | 960 | 0.3946 | 0.8650 |
| 0.1676 | 5.0 | 1200 | 0.2796 | 0.9038 |
| 0.117 | 6.0 | 1440 | 0.2322 | 0.9275 |
| 0.0634 | 7.0 | 1680 | 0.2433 | 0.9306 |
| 0.0425 | 8.0 | 1920 | 0.1843 | 0.9490 |
| 0.0252 | 9.0 | 2160 | 0.1653 | 0.9543 |
| 0.0147 | 10.0 | 2400 | 0.1570 | 0.9577 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
timaeus/dm128
|
timaeus
| 2024-10-18T04:45:08Z | 5 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-10-18T04:43:26Z |
# dm128 Checkpoints
This repository contains the final trained model and intermediate checkpoints.
- The main directory contains the fully trained model (checkpoint 75000).
- The `checkpoints` directory contains all intermediate checkpoints.
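A minimal sketch for fetching only the intermediate checkpoints with `huggingface_hub` (assuming the `checkpoints/` layout described above):
```python
from huggingface_hub import snapshot_download

# download only the intermediate-checkpoint files from this repository
local_dir = snapshot_download(repo_id="timaeus/dm128", allow_patterns=["checkpoints/*"])
print(local_dir)
```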
|
timaeus/dm64
|
timaeus
| 2024-10-18T04:43:23Z | 7 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-10-18T04:42:24Z |
# dm64 Checkpoints
This repository contains the final trained model and intermediate checkpoints.
- The main directory contains the fully trained model (checkpoint 75000).
- The `checkpoints` directory contains all intermediate checkpoints.
|
timaeus/dm32
|
timaeus
| 2024-10-18T04:42:18Z | 5 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-10-18T04:41:26Z |
# dm32 Checkpoints
This repository contains the final trained model and intermediate checkpoints.
- The main directory contains the fully trained model (checkpoint 75000).
- The `checkpoints` directory contains all intermediate checkpoints.
|
Lucia-no/sn29_C00_O18_0
|
Lucia-no
| 2024-10-18T04:40:18Z | 40 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T04:36:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pwork7/mag_baseline_iter1
|
pwork7
| 2024-10-18T04:38:43Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T04:35:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nguyen17/Dev5
|
Nguyen17
| 2024-10-18T04:34:19Z | 32 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"ddpo",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-18T04:33:07Z |
---
license: apache-2.0
tags:
- trl
- ddpo
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL DDPO Model
This is a diffusion model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
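A minimal usage sketch with 🧨 diffusers (the prompt is illustrative; this assumes the repository loads as a standard Stable Diffusion pipeline, as indicated by its tags):
```python
from diffusers import DiffusionPipeline
import torch

# load the DDPO fine-tuned pipeline and sample an image
pipeline = DiffusionPipeline.from_pretrained("Nguyen17/Dev5", torch_dtype=torch.float16).to("cuda")
image = pipeline("a watercolor painting of a fox in a forest").images[0]
image.save("sample.png")
```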
|
AndreaUnibo/JetMoE_rank_infill_updated_all
|
AndreaUnibo
| 2024-10-18T04:26:13Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"jetmoe",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-09T22:27:47Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lightsout19/gpt2-moe-top8-8-partitioned-sst2
|
lightsout19
| 2024-10-18T04:15:18Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2024-10-18T03:21:00Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt2-moe-top8-8-partitioned-sst2-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-moe-top8-8-partitioned-sst2-new
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3598
- Accuracy: 0.8773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
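A minimal sketch of how these values might map onto 🤗 Transformers `TrainingArguments`; the output directory and evaluation strategy are assumptions, and the MoE-partitioned GPT-2 model itself is not reconstructed here.
```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters; output_dir and
# eval_strategy are assumptions rather than values recorded in this card.
training_args = TrainingArguments(
    output_dir="gpt2-moe-top8-8-partitioned-sst2-new",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",  # assumed: the results table reports one validation row per epoch
)
```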
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3266 | 1.0 | 2105 | 0.3459 | 0.8601 |
| 0.2444 | 2.0 | 4210 | 0.3592 | 0.875 |
| 0.2089 | 3.0 | 6315 | 0.3721 | 0.8635 |
| 0.1749 | 4.0 | 8420 | 0.3589 | 0.8784 |
| 0.153 | 5.0 | 10525 | 0.3887 | 0.8773 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
kit-nlp/Roberta-Phishing
|
kit-nlp
| 2024-10-18T04:13:16Z | 7 | 0 | null |
[
"safetensors",
"roberta",
"doi:10.57967/hf/3271",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-10-18T04:05:53Z |
---
license: cc-by-nc-sa-4.0
---
|
mitra-mir/t5-esg
|
mitra-mir
| 2024-10-18T04:07:49Z | 116 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-18T04:06:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
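No official usage snippet is provided yet; the following is a minimal sketch, assuming the model follows the standard 🤗 Transformers text2text-generation API (the input text is an illustrative placeholder, not a documented prompt format).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch: only the model id is taken from this repository; the prompt
# below is an illustrative placeholder, not a documented input format.
model_id = "mitra-mir/t5-esg"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```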
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
meandyou200175/paraphrase-multilingual-MiniLM-L12-v2_finetune
|
meandyou200175
| 2024-10-18T04:07:20Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:43000",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-10-17T15:37:52Z |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
library_name: sentence-transformers
metrics:
- map
- mrr@1
- ndcg@1
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:43000
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Viên nén Paralmax 500mg Boston điều trị đau từ nhẹ đến vừa, đau
đầu, đau răng (10 vỉ x 12 viên)
sentences:
- 'Mô tả ngắn:
Thuốc Paralmax là sản phẩm của Dược phẩm Boston Việt Nam chứa hoạt chất Paracetamol
dùng điều trị triệu chứng đau từ nhẹ đến vừa như: Đau đầu, đau răng, đau bụng
kinh, đau do thấp khớp, nhức mỏi cơ, cảm cúm. Hạ sốt trong các chứng sốt do cảm
cúm hoặc do các chứng nhiễm trùng đường hô hấp.
Thành phần:
Paracetamol: 500mg
Chỉ định:
Thuốc Paralmax chỉ định dùng điều trị trong các trường hợp sau:
Triệu chứng đau từ nhẹ đến vừa như: Đau đầu, đau răng, đau bụng kinh, đau do thấp
khớp, nhức mỏi cơ, cảm cúm. Hạ sốt trong các chứng sốt do cảm cúm hoặc do các
chứng nhiễm trùng đường hô hấp.'
- "Mô tả ngắn:\nTaniki 80 chứa Cao bạch quả (Ginkgo biloba extract) 80mg do \
\ Công ty TNHH SX-TM Dược phẩm NIC (NIC pharm) , Việt Nam sản xuất. Taniki 80\
\ được chỉ định dùng trong các trường hợp suy tuần hoàn, phòng ngừa và làm chậm\
\ quá trình tiến triển của bệnh Alzheimer, điều trị chứng đau cách hồi của bệnh\
\ tắc động mạch chi dưới mãn tính (giai đoạn II), cải thiện hội chứng Raynaud\
\ . \n Thuốc có quy cách đóng gói gồm hộp 10 vỉ x 10 viên nén bao phim.\nThành\
\ phần:\nCao bạch quả: 80mg\nChỉ định:\nThuốc Taniki 80 được chỉ định dùng trong\
\ các trường hợp sau:\nSuy tuần hoàn với các biểu hiện: Chóng mặt, nhức đầu, giảm\
\ trí nhớ, giảm khả năng nhận thức, rối loạn vận động, rối loạn cảm xúc và nhân\
\ cách. Phòng ngừa và làm chậm quá trình tiến triển của bệnh Alzheimer (bệnh sa\
\ sút trí tuệ ở người lớn tuổi). Điều trị chứng đau cách hồi của bệnh tắc động\
\ mạch chi dưới mãn tính (giai đoạn II). Cải thiện hội chứng Raynaud. Được đề\
\ nghị trong vài hội chứng chóng mặt hoặc ù tai, vài loại giảm thính lực, được\
\ xem như thiếu máu cục bộ. Được đề nghị trong vài loại suy võng mạc có thể do\
\ nguyên nhân thiếu máu cục bộ."
- Tiếng kêu trong đầu có thể do ảo thanh gây ra Chào bác, Theo thông tin bác cung
cấp, tiếng lách tách trong đầu có thể là tiếng mở của vòi nhĩ phải (thuộc về bệnh
lý của tai mũi họng), cũng có thể là tiếng ảo thanh (trong bệnh lý tâm thần).
Do đó, bác cần đến khám kiểm tra tại chuyên khoa Tai mũi họng trước, đem kết quả
kiểm tra đến khám lại tại bệnh viện Tâm thần trung ương, bác nhé. Thân mến.
- source_sentence: "Kính gửi BS,\r\n\r\nHiện em đang dùng thuốc tránh thai hàng ngày\
\ diane35 vỉ 21 viên. Em đã uống hết vỉ đầu tiên, ngày uống đầu tiên vào ngày\
\ thứ 2 sau khi hết kinh. Em muốn hỏi BS một số vấn đề sau, mong BS giải đáp\
\ giùm:\r\n\r\n1. Trong thời gian ngừng 07 ngày trước khi uống vỉ tiếp theo, trong\
\ thời gian này quan hệ (mà không dùng biện pháp tránh thai nào khác) thì có bị\
\ dính thai hay không? Và trong trường hợp ngừng thuốc 07 ngày mà không có kinh\
\ thì quan hệ có thai không?\r\n\r\n2. Sau 06 tháng dùng thuốc thì dừng 1 tháng,\
\ việc dừng đột ngột có ảnh hưởng gì đến sức khỏe sinh sản và chu kỳ kinh nguyệt\
\ hay không?\r\n\r\nMong bác sĩ trả lời sớm giúp em."
sentences:
- "Chào bạn, Theo AloBacsi, bạn nên đi cấy mủ niệu đạo, và điều trị cho\r\nkỳ hết\
\ nhé. Khi có thai người mẹ có cơ chế bảo vệ thai nhi, nhưng nếu\r\nngười mẹ có\
\ bệnh ở âm đạo thì sinh con sẽ dính vào mắt con. Vì vậy bạn cho vợ\r\nđi khám\
\ thai và kể cho BS sản phụ khoa để bác ấy điều trị luôn cho vợ bạn. BS\r\nsản\
\ phụ sẽ có cách phòng ngừa cho thai nhi. Chúc gia đình bạn nhiều sức khỏe. Thân\
\ mến,"
- 'Chào em, Thuốc Diane 35 có thành phần Cyproterone acetate 2 mg và Ethinylestradiol
0,035 mg, dùng để tránh thai và điều trị những bệnh mà nguyên nhân của nó là do
hoặc tăng sản xuất androgens hoặc do nhạy cảm đặc biệt với hormone này như mụn
trứng cá, rậm lông ở nữ... Thuốc tránh thai hàng ngày nhằm điều chỉnh hormone
của cơ thể, làm cho trứng không rụng. Thuốc còn làm mỏng niêm mạc tử cung để trứng
nếu được thụ tinh thì sẽ khó làm tổ; làm đặc chất nhày cổ tử cung cản trở tinh
trùng đi qua, giảm sự di chuyển của tinh trùng trong ống dẫn trứng. Thuốc tránh
thai hàng ngày loại 21 viên, ngưng thuốc 7 ngày, trong thời gian đó sẽ có kinh,
nhưng cũng có một số người ít mất kinh nhưng không ảnh hưởng gì cả. Trong thời
gian 7 ngày đó cũng như những ngày tiếp theo trong suốt quá trình uống thuốc,
có thể quan hệ bình thường và hoàn toàn yên tâm. Với điều kiện tuân thủ uống đúng
theo qui định: uống viên đầu tiên vào ngày có kinh thứ 1, và uống đều đặn vào
cùng thời điểm mỗi ngày. Tuy nhiên, trong vòng 5 ngày đầu, nếu uống viên đầu tiên
thì cũng có hiệu quả. Như vậy, trường hợp của em nếu uống đầy đủ và đều đặn, đúng
giờ, thì trong thời gian ngừng thuốc 7 ngày, dù không có kinh em vẫn có thể quan
hệ bình thường và hoàn toàn yên tâm, không lo có thai nhé. Thuốc tránh thai uống
hàng ngày sau thời gian dài uống thuốc, em có thể ngưng mà không ảnh hưởng gì
đến sức khỏe sinh sản. Tuy nhiên chu kỳ kinh có thể thay đổi, vì sau khi ngưng
thuốc chu kỳ kinh là do cơ thể em tự điều chỉnh hormon sinh dục, không phụ thuộc
vào thuốc nữa. Chúc em luôn có sức khỏe tốt!'
- Chào bạn, Thời gian lành xương trung bình 4 - 6 tuần sau gãy, khoảng 6
tháng có thể trở lại sinh hoạt bình thường, và một số vận động mạnh nếu hồi phục
tốt. Chỉ định lấy vít đặt ra khi xương đã lành hẳn, thường là ít nhất sau 1 năm
đối với các gãy xương chi dưới, còn phụ thuộc vào vị trí gãy. Tuy nhiên, đá bóng
là một môn thể thao đối kháng nặng và nguy cơ va chạm cao nên trong giai
đoạn chưa hồi phục hoàn toàn phải hạn chế các vận động quá mạnh như thế này. Hiện
tại, bạn nên tích cực vật lý trị liệu và tái khám thường xuyên tại chuyên
khoa Chấn thương Chỉnh hình để được đánh giá quá trình hồi phục và
xác định hướng can thiệp tiếp theo bạn nhé. Thân mến.
- source_sentence: "Thưa các bác sĩ,\r\n\r\nCháu năm nay 22 tuổi, đi khám phụ khoa\
\ bác sĩ bảo bị mắc bệnh sùi mào gà. Cả cháu và bạn trai đều không quan hệ lăng\
\ nhăng, có khi nào chúng cháu lây bệnh từ tấm trải giường và khăn ở khách sạn\
\ không vậy bác sĩ?\r\n\r\nCháu bị sùi mào gà nhưng lại đi tiểu thấy buốt và thỉnh\
\ thoảng ra máu, khi đi khám ở bệnh viện Da Liễu TPHCM đã được bôi dung dịch acid\
\ và cháu cũng đã đi xét nghiệm các bệnh lây lan qua đường tình dục khác (HIV,\
\ lậu, giang mai,...), kết quả đều âm tính. Vậy liệu đi tiểu buốt và ra máu có\
\ phải là một triệu chứng kèm theo của bệnh sùi mào gà không? Hay còn của bệnh\
\ lý nào khác? Liệu cháu chỉ bôi thuốc thì có hết không hay là phải đốt mới hết?\r\
\n\r\nBS có thể chỉ cho cháu cách vệ sinh vùng kín trong thời gian này? Cháu đọc\
\ trên mạng thấy người ta bảo rửa bằng trà và cháu cũng đang làm theo như thế.\r\
\n\r\nCháu chờ mong sự tư vấn của các bác sĩ!"
sentences:
- ' Chào em, Triệu chứng , chán ăn của em nhiều khả năng do yếu tố tâm lý quá căng
thẳng (stress), mà stress có thể gây ảnh hưởng lên toàn thân chứ không chỉ cơ
quan tiêu hóa như gây mệt mỏi, kém tập trung, suy nhược cơ thể, mất ngủ... Tuy
nhiên, em cũng cần đến bs ck tiêu hóa để kiểm tra xem có nhiễm Hp gây viêm loét
dạ dày tá tràng hay không, vì đây cũng là một bệnh lý khá thường gặp và nếu có
nhiễm Hp gây triệu chứng dạ dày thì cần điều trị kháng sinh, chống tiết acid mới
khỏi hẳn. Nhưng dù em có hay không có nhiễm Hp thì chắc chắn yếu tố tâm lý đang
đè nặng lên em và góp phần gây rối loạn tiêu hóa. Em nên giải tỏa tâm lý cho bản
thân bằng một lịch học - làm việc - ăn uống - nghỉ ngơi hợp lý, có thể ăn thành
nhiều bữa với các thực phẩm dễ tiêu, phong phú, nhiều dinh dưỡng, ít dầu mỡ, thêm
rau xanh, trái cây, uống đủ nước, hạn chế cà phê, rượu bia và không hút thuốc
lá, cố gắng tập thể dục điều độ sẽ mang lại sức sống mới cho em. Thân mến! '
- 'Chào em, Sùi mào gà do Papilloma virus (HPV) gây ra và là một trong những bệnh
lây truyền qua đường tình dục, có thể gặp ở cả nam và nữ, bệnh có thể có biểu
hiện triệu chứng lâm sàng hoặc không có triệu chứng, thời gian ủ bệnh lại kéo
dài, từ 2-9 tháng sau khi tiếp xúc. Do đó, sùi mào gà rất dễ lay lan cho bạn tình,
nhiều khi bệnh nhân không nhớ rõ mình bị lây nhiễm từ lúc nào. Việc truyền bệnh
chủ yếu là do một trong hai người giao phối lây nhiễm cho nhau , nên đừng đỗi
lỗi cho khách sạn hết, em nhé. Em đã làm xét nghiệm tầm soát hết các bệnh lây
lan qua đường tình dục khác là rất tốt. Tuy nhiên, bệnh có nguy cơ gây ung thư
cổ tử cung, âm đạo, dương vật và hậu môn, nên bệnh nhân nữ bị sùi mào gà cổ tử
cung cần phải làm xét nghiệm phiến đồ cổ tử cung định kỳ hàng năm để phát hiện
sớm ung thư. Triệu chứng của bệnh sùi mào gà chủ yếu là các u nhú màu hồng tươi,
mềm, có chân hoặc có cuống, không đau và dễ chảy máu, bệnh không có biểu hiện
tiểu buốt. Có thể em có bệnh lý viêm nhiễm đường tiết niệu đi kèm, em làm thêm
xét nghiệm nước tiểu và siêu âm bụng để có chẩn đoán sớm nhé. Nốt sùi có thể mọc
bất kỳ chỗ nào ở vùng kín: âm hộ, hậu môn, dương vật, lỗ niệu đạo, cổ tử cung…
thậm chí gặp cả ở miệng, họng. Việc điều trị sùi mào gà chủ yếu nhằm phá hủy tổn
thương sùi chứ không thể tiêu diệt được virus, tùy thuộc vào mức độ tổn thương
BS sẽ có hướng chọn lựa phương pháp điều trị thích hợp cho em. Việc quan trọng
của em bây giờ là tuân thủ điều trị và theo dõi tái khám đúng lịch hẹn . Sau khi
điều trị khỏi hoàn toàn ít nhất là 3 tháng mới nên có quan hệ tình dục để tránh
lây cho vợ, chồng hoặc bạn tình. Việc vệ sinh vùng kín trong bệnh lý này cũng
bình thường như lúc em chưa mắc bệnh. Còn vệ sinh vùng kín bằng nước trà chưa
thấy y học đề cập đến vấn đề này. Em chú ý khi tham khảo những thông tin trên
mạng, phải xem rõ trang web đó có đáng tin cậy không, thông tin do tác giả nào
viết… nhất là việc liên quan đến sức khỏe lại càng phải cẩn thận hơn, em nhé!
Thân ái!'
- "Hình minh\r\nhọa. Nguồn Internet Chào bạn, Một cơ thể khỏe mạnh bình thường thì\r\
\nkhông có , nếu có hạch, phải\r\nkiểm tra xem đó là hạch viêm đơn thuần hay là\
\ ác tính. Hơn nữa, chưa chắc khối\r\nbất thường mà bạn ghi nhận có phải thật\
\ sự là hạch hay không, mà có thể là khối\r\náp-xe từ nhiễm trùng da chẳng hạn.\
\ Vì thế, rất cần thiết đưa mẹ bạn đến\r\nBV đa khoa uy tín để kiểm tra và điều\
\ trị thích hợp kịp thời, bạn nhé. Thân mến! "
- source_sentence: Thuốc Flavital 500 DHT bổ gan thận, mạnh gân cốt (5 vỉ x 10 viên)
sentences:
- 'Nguy cơ u trong tim Những ai có nguy cơ mắc phải u trong tim? Chưa có nhiều nghiên
cứu về đối tượng nguy cơ của u tim nguyên phát (lành tính và ác tính) và ít được
đề cập trong các tài liệu. Một số loại u (như u nhầy) có tỉ lệ nữ mắc bệnh cao
hơn nam giới. Yếu tố làm tăng nguy cơ mắc phải u trong tim Một số yếu tố làm tăng
nguy cơ mắc U trong tim, bao gồm: Mắc các bệnh ung thư, đặc biệt trong giai đoạn
di căn (giai đoạn muộn) như ung thư phổi, ung thư vú, ung thư thực quản, tuyến
giáp, biểu mô thận... Nghiện thuốc lá; Lạm dụng bia rượu; Thường xuyên làm việc,
sinh hoạt trong môi trường ô nhiễm; Phơi nhiễm với tia bức xạ.'
- 'Mô tả ngắn:
Thuốc Taromentin là sản phẩm của Tarchomin Pharmaceutical Works "Polfa" S.A có
thành phần chính là Amoxicillin và Clavulanic acid dùng trong trường hợp viêm
xoang cạnh mũi và nhiễm trùng tai giữa; nhiễm trùng đường hô hấp; nhiễm trùng
đường tiết niệu; nhiễm trùng da và mô mềm, bao gồm nhiễm trùng răng miệng, nhiễm
trùng xương khớp.
Thành phần:
Amoxicillin: 875mg
Clavulanic acid: 125mg
Chỉ định:
Taromentin dùng được cho cả người lớn và trẻ em, trong điều trị các trường hợp
nhiễm trùng sau:
Viêm xoang cạnh mũi và nhiễm trùng tai giữa. Nhiễm trùng đường hô hấp. Nhiễm trùng
đường tiết niệu. Nhiễm trùng da và mô mềm, bao gồm nhiễm trùng răng miệng. Nhiễm
trùng xương khớp.'
- "Mô tả ngắn:\nThuốc Flavital 500 là sản phẩm được sản xuất bởi công ty cổ phần\
\ dược phẩm Hà Tây, thuốc có thành phần chính là cao khô hỗn hợp các dược liệu:\
\ Thỏ ty tử, hà thủ ô đỏ, dây đau xương, đỗ trọng, cốt toái bổ, cúc bất tử, nấm\
\ sò khô. Thuốc có công năng chính là: Bổ gan thận, mạnh gân cốt, tráng dương,\
\ ích tinh, nuôi dưỡng khí huyết ở người cao tuổi. \n Thuốc Flavital 500 được\
\ bào chế dưới dạng viên nang cứng. Thuốc được đóng gói theo quy cách hộp 5 vỉ\
\ x 10 viên.\nThành phần:\nThỏ ty tử: 25mg\nHà thủ ô đỏ: 25mg\nDây đau xương:\
\ 25mg\nĐỗ trọng: 25mg\nCốt toái bổ: 25mg\nCúc bất tử: 50mg\nNấm sò khô: 500mg\n\
Chỉ định:\nThuốc Flavital 500 được chỉ định dùng trong các trường hợp sau:\nTrong\
\ các trường hợp thận yếu (đau mỏi lưng, tiểu tiện đêm, giảm hoạt động sinh lý...).\
\ Tăng cường sức khỏe cho người cao tuổi (run rẩy, tê bì, suy kiệt...). Các triệu\
\ chứng suy giảm trí nhớ, đau đầu, mất ngủ. Khắc phục hội chứng tiền đình và phục\
\ hồi các tổn thương stress. Các biểu hiện rối loạn tuần hoàn não ở người cao\
\ tuổi như mất thăng bằng, hoa mắt..."
- source_sentence: Chào bác sĩ. Em đi khám ở bệnh viện đa khoa bác sĩ kết luận là
em bị đa nhân 2 thùy tuyến giáp tirads 3 kém đồng nhất 2 thùy tuyến giáp. Nang
thùy trái tuyến giáp vậy có nguy hiểm không ạ? Em hoang mang quá ạ.
sentences:
- Chào bạn, Tình trạng bệnh lý dạ dày của bạn khá xấu, bởi vì viêm dạ dày mà có
chuyển sản niêm mạc ruột vùng hang vị là tình trạng tiền ung thư, có nguy cơ tiến
triển đến ung thư nếu không điều trị sớm và đúng chuẩn. Tuy nhiên, bệnh vẫn chưa
đến mức là ung thư dạ dày. Do đó, bạn cần kiên trì theo dõi bệnh và điều trị bệnh
này với BS chuyên khoa Tiêu hóa, thuốc trọng yếu điều trị bệnh này là ức chế bơm
proton để giảm tiết acid dạ dày, các thuốc khác hỗ trợ điều trị triệu chứng đi
kèm nếu có (như đau bụng, đầy hơi, ợ chua…). Theo luật khám và chữa bệnh hiện
nay của Bộ Y tế, BS không được phép kê thuốc thông qua kênh truyền thông mà không
thông qua thăm khám + hỏi bệnh trực tiếp với người bệnh, điều này là do vấn đề
an toàn của người bệnh. Nếu muốn phối hợp các phương thức trị liệu Đông y như
nghệ, bài thuốc cổ truyền… thì phải thông báo với BS Tây y đang điều trị thuốc
cho bạn để tránh tương tác thuốc, quá liều thuốc. Song song đó, bạn cần hạn chế
ăn đồ chua cay, nước có gas, nhiều dầu mỡ, nhiều gia vị, café, bia rượu, không
hút thuốc lá và tránh căng thẳng đầu óc, ăn uống đúng giờ và nghỉ ngơi hợp lý.
Thân mến.
- "Hình minh họa.\r\nNguồn Internet Chào bạn, Nếu như đã có\r\ný định đi thì bạn\
\ nên đến các phòng khám có bác sĩ chuyên khoa Da liễu để\r\nđược tư vấn cụ thể\
\ hơn về vấn đề chăm sóc da và điều trị dứt điểm mụn bạn nhé.\r\nTránh việc đến\
\ các cơ sở không uy tín dễ để lại di chứng sẹo xấu cho da mặt. Thân mến! AloBacsi.com\
\ Cổng thông tin tư vấn sức khỏe miễn phí"
- 'Chào em, Phân loại Ti-rads là phân loại tiên lượng ác tính của nhân giáp trên
siêu âm. + TI-RADS-1: Mô giáp lành. + TI-RADS-2: Các tổn thương lành tính (0%
nguy cơ ác tính). + TI-RADS-3: Các tổn thương nhiều khả năng lành tính (1,7% ác
tính). + TI-RADS-4: 4a: Tổn thương có 1 dấu hiệu siêu âm nghi ngờ (3,3% ác tính).
4b: Tổn thương có 2 dấu hiệu siêu âm nghi ngờ (9,2% ác tính). 4c: Tổn thương có
3-4 dấu hiệu siêu âm nghi ngờ (44,4-72,4% ác tính). + TI-RADS-5: có từ 5 trở lên
dấu hiệu siêu âm nghi ngờ (87,5% ác tính). + TI-RADS-6: Biết chắc chắn bướu ác
tính trước đó. Theo kết quả mô tả siêu âm tuyến giáp của em thì hai nhân giáp
thùy phải và thùy trái của em được đánh giá là tirads 3 (phân độ nguy cơ ác tính
của nhân giáp), tức là nhiều khả năng lành tính và có 1,7% nguy cơ ác tính mà
thôi. Đối với nhân giáp tirads 3, nguy cơ ác tính thấp, thì việc quyết định nên
sinh thiết hay theo dõi sẽ cần dựa vào nhiều yếu tố khác, như nhân giáp này có
làm rối loạn hormone tuyến giáp hay không, tiền căn gia đình có ai bị ung thư
không, bản thân em có từng chiếu xạ vùng đầu mặt cổ không… Do đó, em cần khám
chuyên khoa Giáp, thuộc chuyên khoa Nội tiết hoặc chuyên khoa Ung bướu để bác
sĩ xem lại biên bản siêu âm, khai thác thêm những thông tin về bản thân và gia
đình em, làm thêm xét nghiệm nếu chưa đủ (như xét nghiệm kiểm tra chức năng tuyến
giáp) từ đó mới đưa ra kết luận về hướng xử trí phù hợp tiếp theo, em nhé.'
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: reranking
name: Reranking
dataset:
name: dev eval
type: dev-eval
metrics:
- type: map
value: 0.9982142857142857
name: Map
- type: mrr@1
value: 0.9964285714285714
name: Mrr@1
- type: ndcg@1
value: 0.9964285714285714
name: Ndcg@1
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision ae06c001a2546bef168b9bf8f570ccb1a16aaa27 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("meandyou200175/paraphrase-multilingual-MiniLM-L12-v2_finetune")
# Run inference
sentences = [
'Chào bác sĩ. Em đi khám ở bệnh viện đa khoa bác sĩ kết luận là em bị đa nhân 2 thùy tuyến giáp tirads 3 kém đồng nhất 2 thùy tuyến giáp. Nang thùy trái tuyến giáp vậy có nguy hiểm không ạ? Em hoang mang quá ạ.',
'Chào em, Phân loại Ti-rads là phân loại tiên lượng ác tính của nhân giáp trên siêu âm. + TI-RADS-1: Mô giáp lành. + TI-RADS-2: Các tổn thương lành tính (0% nguy cơ ác tính). + TI-RADS-3: Các tổn thương nhiều khả năng lành tính (1,7% ác tính). + TI-RADS-4: 4a: Tổn thương có 1 dấu hiệu siêu âm nghi ngờ (3,3% ác tính). 4b: Tổn thương có 2 dấu hiệu siêu âm nghi ngờ (9,2% ác tính). 4c: Tổn thương có 3-4 dấu hiệu siêu âm nghi ngờ (44,4-72,4% ác tính). + TI-RADS-5: có từ 5 trở lên dấu hiệu siêu âm nghi ngờ (87,5% ác tính). + TI-RADS-6: Biết chắc chắn bướu ác tính trước đó. Theo kết quả mô tả siêu âm tuyến giáp của em thì hai nhân giáp thùy phải và thùy trái của em được đánh giá là tirads 3 (phân độ nguy cơ ác tính của nhân giáp), tức là nhiều khả năng lành tính và có 1,7% nguy cơ ác tính mà thôi. Đối với nhân giáp tirads 3, nguy cơ ác tính thấp, thì việc quyết định nên sinh thiết hay theo dõi sẽ cần dựa vào nhiều yếu tố khác, như nhân giáp này có làm rối loạn hormone tuyến giáp hay không, tiền căn gia đình có ai bị ung thư không, bản thân em có từng chiếu xạ vùng đầu mặt cổ không… Do đó, em cần khám chuyên khoa Giáp, thuộc chuyên khoa Nội tiết hoặc chuyên khoa Ung bướu để bác sĩ xem lại biên bản siêu âm, khai thác thêm những thông tin về bản thân và gia đình em, làm thêm xét nghiệm nếu chưa đủ (như xét nghiệm kiểm tra chức năng tuyến giáp) từ đó mới đưa ra kết luận về hướng xử trí phù hợp tiếp theo, em nhé.',
'Hình minh họa.\r\nNguồn Internet Chào bạn, Nếu như đã có\r\ný định đi thì bạn nên đến các phòng khám có bác sĩ chuyên khoa Da liễu để\r\nđược tư vấn cụ thể hơn về vấn đề chăm sóc da và điều trị dứt điểm mụn bạn nhé.\r\nTránh việc đến các cơ sở không uy tín dễ để lại di chứng sẹo xấu cho da mặt. Thân mến! AloBacsi.com Cổng thông tin tư vấn sức khỏe miễn phí',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Reranking
* Dataset: `dev-eval`
* Evaluated with [<code>RerankingEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.RerankingEvaluator) (a usage sketch follows the metrics table)
| Metric | Value |
|:--------|:-----------|
| **map** | **0.9982** |
| mrr@1 | 0.9964 |
| ndcg@1 | 0.9964 |
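A minimal sketch of reproducing this kind of evaluation with `RerankingEvaluator`, assuming a small hand-built list of (query, positive, negative) samples; the texts are placeholders, not items from the actual dev split.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import RerankingEvaluator

model = SentenceTransformer("meandyou200175/paraphrase-multilingual-MiniLM-L12-v2_finetune")

# Each sample pairs a query with relevant (positive) and irrelevant (negative) texts.
samples = [
    {
        "query": "example medical question",
        "positive": ["a relevant doctor answer"],
        "negative": ["an unrelated answer"],
    },
]
evaluator = RerankingEvaluator(samples, name="dev-eval")
results = evaluator(model)
print(results)  # dictionary of reranking metrics such as MAP and MRR
```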
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 43,000 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 66.63 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 120.11 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 119.58 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Chào bác sĩ,
<br>
<br>Em là nam nhưng vú to làm em rất mặc cảm. Em đã đi khám bệnh, sau khi làm các xét nghiệm và chiếu chụp, bác sĩ xác định em phì đại tuyến vú.
<br>
<br>Theo bác sĩ em nên dùng thuốc (nội khoa) hơn hay phẫu thuật (ngoại khoa) hơn? Mỗi phương pháp có lợi hay hại gì ạ? Em băn khoăn lắm, rất mong được bác sĩ tư vấn. Em cảm ơn bác sĩ!</code> | <code>Hùng thân mến, Vú to nam giới là tình trạng phì đại tuyến vú , thường không đối xứng hoặc một bên và có thể có mật độ mềm. Chẩn đoán phân biệt với ung thư vú và vú to nam giới giả, thường thấy ở nam giới béo phì và được đặc trưng bởi lắng đọng mô mỡ mà không có tăng sinh tuyến. Nguyên nhân là do tăng hoạt động của estrogen hoặc tăng tỷ số estrogen-androgen Điều trị phụ thuộc vào nguyên nhân gây vú to nam giới, phẫu thuật hay điều trị nội khoa phù hợp với các trường hợp khác nhau. Nội khoa điều trị hormon thay thế sẽ làm cải thiện vú to nam giới ở bệnh nhân nam suy sinh dục. Vú to nam giới do thuốc thì ngưng thuốc, vú to nam giới do dậy thì thì theo dõi. Phẫu thuật được chỉ định nếu có các vấn đề tâm lý, thẩm mỹ trường hợp nhu mô vú tiếp tục phát triển, ác tính. Trân trọng.</code> | <code>Tại Việt Nam, chưa ban hành tiêm vắc xin mũi 3 cho người dân. Chào em, Hiện tại ở nước ta chưa có ban hành chỉ thị tiêm mũi thứ 3 vắc xin phòng COVID, vì còn rất nhiều người dân trên cả nước chưa được tiêm chủng mũi vắc xin phòng COVID nào cả, vì chúng ta đang thiếu vắc xin. Vắc xin phòng COVID cho đến hiện nay toàn bộ là miễn phí và phân đồng đều cho tất cả người dân theo chỉ thị của bộ y tế, chưa có cơ sở y tế nào được phép mua bán tiêm vắc xin theo yêu cầu riêng của khách hàng. Vì thế, việc kê khai gian dối tìm cách tiêm mũi thứ 3 vắc xin phòng COVID nếu bị phát hiện có thể bị truy tố pháp luật sau này. Ở các nước phát triển, nguồn vắc xin dự trữ đầy đủ thì họ đã triển khai tiêm mũi thứ 3 vắc xin phòng COVID cho tất cả người dân sau 6-9 tháng từ thời điểm tiêm mũi 2. Cách phối trộn vắc xin mũi thứ 2 cũng tương tự như mũi thứ 3, đó là Astra mũi đầu thì có thể theo sau là Astra hoặc Moderna hoặc Pfizer. Thân mến!</code> |
| <code>Thuốc xịt mũi Thái Dương điều trị hắt hơi, sổ mũi, nghẹt mũi (20ml)</code> | <code>Mô tả ngắn:<br>Thuốc Xịt Mũi Thái Dương là sản phẩm được sản xuất bởi Công ty Cổ phần Sao Thái Dương, thuốc có thành phần chinh là Nghệ vàng ( Rhizoma Curcuma longae ), Menthol ( Mentholum ), Camphor ( Comphora ), được dùng trong các trường hợp: hắt hơi liên tục nhiều lần không dứt, sổ mũi, nghẹt mũi, ứ đọng dịch đờm nhầy trong xoang mũi, xoang trán...mỗi khi thay đổi thời tiết hay hít phải bụi nhà, phấn hoa, mùi lạ...; ngứa mũi, khô mũi, sổ mũi, cảm giác khó chịu ở mũi, viêm mũi do cảm cúm... <br> Thuốc Xịt Mũi Thái Dương được bào chế dưới dạng chất lỏng màu vàng, mùi thơm tinh dầu, pH 5-7. Hộp 1 lọ x 20 ml.<br>Thành phần:<br>Nghệ: 2<br>Menthol: 20<br>DL-camphor: 20mg<br>Chỉ định:<br>Thuốc Xịt Mũi Thái Dương được chỉ định dùng trong các trường hợp sau:<br>Hắt hơi liên tục nhiều lần không dứt, sổ mũi, nghẹt mũi, ứ đọng dịch đờm nhầy trong xoang mũi, xoang trán...mỗi khi thay đổi thời tiết hay hít phải bụi nhà, phấn hoa, mùi lạ...<br>Ngứa mũi, khô mũi, sổ mũi, cảm giác khó chịu ở mũi, viêm mũi do cảm cúm...</code> | <code>Mô tả ngắn:<br>Triplixam của Công ty Servier, Ireland, thành phần chính perindopril, indapamid và amlodipin; là nhóm thuốc điều trị tăng huyết áp, có các dụng hạ huyết áp; được sử dụng trong điều trị thay thế cho bệnh nhân đã được kiểm soát huyết áp khi kết hợp perindopril/indapamid và amlodipin có cùng hàm lượng. <br> Thuốc được bào chế dưới dạng viên nén bao phim, màu trắng, đựng trong hộp chứa 30 viên.<br>Thành phần:<br>Amlodipine: 10mg<br>Indapamide: 2.5mg<br>Perindopril: 5mg<br>Chỉ định:<br>Thuốc Triplixam được chỉ định dùng trong các trường hợp sau: Điều trị thay thế trong điều trị tăng huyết áp cho bệnh nhân đã được kiểm soát huyết áp khi kết hợp perindopril/indapamid và amlodipin có cùng hàm lượng.</code> |
| <code>Khoảng 1 tuần nay chân em bị nổi những mụn nhỏ li ti rất ngứa và lây lan, có mụn mềm, có mụn có mài ngay đầu mụn. 3 ngày đầu còn lây sang vùng cánh tay và bụng (nhưng không nhiều), ngay cả vết trầy xước nhỏ ở đầu gối cũng lâu lành.
<br>
<br>Em có đi nhà thuốc và được tư vấn thoa kem Mật Ong Madeleine Ritchie nhưng không thấy hiệu quả. Hiện tại em đang thoa kem Beprosone nhưng cũng không thấy cải thiện nhiều.
<br>
<br>Em không bị côn trùng đốt cũng nhưng không sử dụng mỹ phẩm gì cả, nên không hiểu sao lại bị như vậy. BS có thể tư vấn cho em thuốc thoa đồng thời trị thâm không ạ? Em sợ sẽ để lại thâm rất xấu nên lo lắng. Đây là những loại kem thoa em đã sử dụng nhưng không thấy hiệu quả. Chân thành cảm ơn BS.
<br>
<br>(Bạn đọc Nguyễn Lê Thanh Tâm)</code> | <code> Chào em, Em chụp hình những tuýp kem đã dùng nhưng không đưa kèm hình sang thương nên rất khó cho bác sĩ để chẩn đoán bệnh của em là gì và không thể kê toa cho em trong lúc này. Nếu được em vui lòng cung cấp hình ảnh cho chương trình. Em cũng không nên tự ý bôi thuốc vì có thể không điều trị được bệnh mà còn gây ra nhiều tác dụng phụ ảnh hưởng sức khỏe. Nếu quá lo lắng, em nên đến gặp bác sĩ chuyên khoa Da Liễu để được thăm khám trực tiếp và chỉ định xét nghiệm cần thiết để chẩn đoán bệnh em nhé! Thân mến! </code> | <code>Chào em, Khi bệnh ung thư gan giai đoạn cuối hay các bệnh ung thư giai đoạn cuối khác thì khả năng sống tiếp là rất thấp. Những thống kê tại bệnh viện cho thấy, bệnh nhân ung thư giai đoạn cuối chỉ sống được khoảng 6 đến 8 tháng, nhanh chỉ trong 1 tháng. Thường các phương pháp điều trị ung thư gan chủ yếu là hóa trị, xạ trị hoặc phẫu thuật cắt bỏ khối u, tiêm Ethanol hoặc nhiệt RFA. Tuy nhiên, đối với ung thư giai đoạn cuối thường không có một phương pháp nào trong y học có thể chữa khỏi bệnh, bởi bệnh đã di căn và lây sang các cơ quan khác của cơ thể như mạch máu, phổi, thận… Thường bác sĩ sẽ cố gắng kiểm soát khối u và kéo dài sự sống cho người bệnh bằng hóa trị và xạ trị, đồng thời cải thiện sức khỏe và động viên tinh thần cho bệnh nhân. Thân mến.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters (a construction sketch follows the block):
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
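A minimal sketch of constructing this loss with the listed parameters in Sentence Transformers; the model shown is the base checkpoint this card finetunes from.
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# MultipleNegativesRankingLoss with the parameters listed above: scale=20.0 and
# cosine similarity. In-batch positives of other triplets act as extra negatives.
train_loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```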
### Evaluation Dataset
#### Unnamed Dataset
* Size: 7,000 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 65.65 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 119.4 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 119.51 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Thuốc Esseil-10 Davipharm điều trị tăng huyết áp (10 vỉ x 10 viên)</code> | <code>Mô tả ngắn:<br>Thuốc Esseil-10 là sản phẩm được sản xuất bởi Công ty Cổ phần Dược phẩm Đạt Vi Phú. Thuốc có thành phần chính là cilnidipin, được chỉ định để điều trị tăng huyết áp. <br> Thuốc Esseil-10 được bào chế dưới dạng viên nén tròn, bao phim màu vàng, một mặt có dập logo, mặt kia có dập gạch ngang và được đóng gói theo quy cách hộp 10 vỉ x 10 viên.<br>Thành phần:<br>Cilnidipine: 10mg<br>Chỉ định:<br>Thuốc Esseil-10 được chỉ định dùng trong các trường hợp sau: Cilnidipin được chỉ định để điều trị tăng huyết áp.</code> | <code>Mô tả ngắn:<br>Thuốc Althax là sản phẩm của Mediplantex chứa hoạt chất Thymomodulin có tác dụng hỗ trợ dự phòng tái phát nhiễm khuẩn hô hấp ở trẻ em và người lớn; viêm mũi dị ứng, dự phòng tái phát dị ứng thức ăn; cải thiện các triệu chứng lâm sàng ở bệnh nhân HIV/AIDS và tăng cường hệ miễn dịch đã suy giảm ở người cao tuổi.<br>Thành phần:<br>Thymomodulin: 120mg<br>Chỉ định:<br>Thuốc Althax được chỉ định dùng trong các trường hợp sau:<br>Hỗ trợ dự phòng tái phát nhiễm khuẩn hô hấp ở trẻ em và người lớn. Hỗ trợ điều trị viêm mũi dị ứng , dự phòng tái phát dị ứng thức ăn. Hỗ trợ cải thiện các triệu chứng lâm sàng ở bệnh nhân HIV/AIDS . Hỗ trợ tăng cường hệ miễn dịch đã suy giảm ở người cao tuổi.</code> |
| <code>Thưa BS,
<br>
<br>Con là nữ, năm nay 14 tuổi. Dạo gần đây do uống thuốc nhiều con hay bị táo bón. Con có tìm hiểu sơ về bệnh trĩ, nhưng khi đi đại tiện con không bị chảy máu gì cả. Gần đây con có cảm giác hơi vướng ở hậu môn, không hẳn là ở hậu môn, cứ như ở ngoài phía 2 bên mép thôi.
<br>
<br>Cứ thi thoảng là bị rồi hình như tự hết thì phải, có cảm giác khi đi ngoài chưa đi hết và thi thoảng thấy như có vật gì nhỏ lòi ra ngoài nhưng tự cơ thể có thể đẩy vô được bình thường. Dấu hiệu như vậy giống với bệnh nào nhất vậy BS? Con cảm ơn BS.</code> | <code> Chào em, Theo thông tin em cung cấp, nhiều khả năng em có , nhưng ở mức độ nhẹ khoảng trĩ độ 1 mà thôi. Cảm giác đi cầu chưa hết thường là do khối phân tròn nhỏ sót lại, khó xuất ra do mô xung quanh hậu môn đã phù nề kèm với búi trĩ sau khi em đã cố rặn một lúc lâu, cũng có thể đó chính là búi trĩ nội. Vì thế em không cần và không nên ráng rặn tiếp để xuất hết khối phân này, ở lần đi tiêu tiếp theo sẽ tự khắc thải ra được. Tình trạng này thì không cần dùng thuốc, chỉ cần thay đổi lối sống bằng cách ăn nhiều rau xanh, trái cây, hạn chế thực phẩm cay, nhiều dầu mỡ, uống nhiều nước, tối thiểu phải 2-3 lít nước mỗi ngày, không ngồi lâu trên 5 phút khi đi vệ sinh, tập thể dục đều đặn sẽ giúp em cải thiện tình trạng này, em nhé. Thân mến!</code> | <code>Nếu bạn là phụ nữ nhiễm virus viêm gan B, khi có thai bạn nên đi khám đúng chuyên khoa Trong bệnh nhiễm virus viêm gan B thì có những thể lâm sàng sau: Nhiễm virus viêm gan B thể không hoạt động: tế bào gan không bị tổn thương, men gan không tăng, không cần điều trị, chỉ theo dõi định kỳ men gan, tầm sóat ung thư gan (AFP, siêu âm bụng) và sống lối sống lành mạnh, hạn chế các chất độc gan (bia rượu, thuốc uống bừa bãi, thuốc đông nam không rõ loại). Viêm gan B, tức là vừa nhiễm virus viêm gan B và virus này đang làm tổn thương gan, men gan sẽ tăng. Viêm gan B được chia thành viêm gan B cấp và viêm gan B mạn tính. Viêm gan B cấp tính là tổn thương tế bào gan cấp do nhiễm HBV, hiện tượng tế bào gan bị phá hủy không phải do virus mà do chính cơ thể chống lại virus gây ra, và trong vòng 6 tháng là cơ thể sẽ thải toàn bộ virus và tạo miễn dịch bảo vệ suốt đời. Viêm gan B mạn là cơ thể không thể tự thải trừ HBV ra khỏi cơ thể sau 6 tháng. BS không rõ em thuộc thể nào, có đang dùng thuốc đặc trị viêm gan B hay không, nhưng nhìn chung bây giờ bệnh này nước ta kiểm soát lây truyền từ mẹ sang con rất là tốt, có nhiều chiến lược cho từng trường hợp khác nhau: mẹ nhiễm virus viêm gan B thể không hoạt động hay mẹ viêm gan B mạn đang dùng thuốc dự định có con, mẹ viêm gan B mạn đang dùng thuốc và vô tình phát hiện co con. Do đó, vợ chồng em nay muốn có em bé thì nên đến khám tại chuyên khoa gan mật trước, trình bày ý định này của mình để bs kiểm tra lại tổng quát cho em, tùy tình huống mà sẽ có hướng dẫn cụ thể riêng, em nhé.</code> |
| <code>Thuốc Fexet Getz điều trị viêm mũi dị ứng, mày đay tự phát mãn tính (2 vỉ x 5 viên)</code> | <code>Mô tả ngắn:<br>Fexet 120 mg có thành phần chính fexofenadine, do công ty Getzpharma sản xuất, được dùng để điều trị làm giảm các triệu chứng có liên quan đến bệnh viêm mũi dị ứng theo mùa và nổi mề đay tự phát mãn tính.<br>Thành phần:<br>Fexofenadine: 120mg<br>Chỉ định:<br>Thuốc Fexet 120 mg được chỉ định dùng trong các trường hợp sau:<br>Ðiều trị làm giảm các triệu chứng có liên quan đến bệnh:<br>Viêm mũi dị ứng theo mùa bao gồm hắt hơi, sổ mũi , ngứa mũi, miệng, cổ họng, chảy nước mắt, đỏ mắt. Nổi mề đay tự phát mãn tính.</code> | <code>Mô tả ngắn:<br>Hoạt Huyết Dưỡng Não được phân phối bởi Công ty cổ phần dược phẩm Xanh, thành phần chính là cao khô lá bạch quả, cao khô rễ đinh lăng, là thuốc dùng trong trường hợp đau đầu, chóng mặt, giảm trí nhớ; thiểu năng tuần hoàn não, ù tai, giảm thính lực. Ngoài ra, thuốc còn được dùng trong chứng đau cách hồi do tắc động mạch chi dưới mãn tính, hội chứng Raynaud và chứng nhược dương.<br>Thành phần:<br>Bạch quả: 80mg<br>Đinh lăng: 75mg<br>Chỉ định:<br>Thuốc Hoạt Huyết Dưỡng Não được chỉ định dùng trong các trường hợp sau:<br>Đau đầu , chóng mặt, giảm trí nhớ. Thiểu năng tuần hoàn não, ù tai , giảm thính lực. Dùng trong chứng đau cách hồi do tắc động mạch chi dưới mãn tính, hội chứng Raynaud và chứng nhược dương.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
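A hedged sketch of how the non-default values above might be expressed as `SentenceTransformerTrainingArguments`; the output directory and evaluation step count are assumptions, not values recorded in the card.
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="paraphrase-multilingual-MiniLM-L12-v2_finetune",  # assumed
    eval_strategy="steps",
    eval_steps=1000,  # assumed from the 1000-step evaluation cadence in the training logs
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```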
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | dev-eval_map |
|:------:|:-----:|:-------------:|:---------------:|:------------:|
| 0 | 0 | - | - | 0.9784 |
| 0.0372 | 100 | 1.102 | - | - |
| 0.0744 | 200 | 0.7679 | - | - |
| 0.1116 | 300 | 0.5825 | - | - |
| 0.1488 | 400 | 0.5424 | - | - |
| 0.1860 | 500 | 0.5088 | - | - |
| 0.2232 | 600 | 0.4052 | - | - |
| 0.2604 | 700 | 0.4012 | - | - |
| 0.2976 | 800 | 0.3834 | - | - |
| 0.3348 | 900 | 0.3688 | - | - |
| 0.3720 | 1000 | 0.3395 | 0.3014 | 0.9954 |
| 0.4092 | 1100 | 0.3401 | - | - |
| 0.4464 | 1200 | 0.3096 | - | - |
| 0.4836 | 1300 | 0.3438 | - | - |
| 0.5208 | 1400 | 0.2635 | - | - |
| 0.5580 | 1500 | 0.3225 | - | - |
| 0.5952 | 1600 | 0.3069 | - | - |
| 0.6324 | 1700 | 0.2943 | - | - |
| 0.6696 | 1800 | 0.2819 | - | - |
| 0.7068 | 1900 | 0.2679 | - | - |
| 0.7440 | 2000 | 0.2646 | 0.2357 | 0.9964 |
| 0.7812 | 2100 | 0.2487 | - | - |
| 0.8185 | 2200 | 0.2254 | - | - |
| 0.8557 | 2300 | 0.2623 | - | - |
| 0.8929 | 2400 | 0.2399 | - | - |
| 0.9301 | 2500 | 0.2206 | - | - |
| 0.9673 | 2600 | 0.2299 | - | - |
| 1.0045 | 2700 | 0.2218 | - | - |
| 1.0417 | 2800 | 0.2163 | - | - |
| 1.0789 | 2900 | 0.206 | - | - |
| 1.1161 | 3000 | 0.2099 | 0.1937 | 0.9976 |
| 1.1533 | 3100 | 0.2116 | - | - |
| 1.1905 | 3200 | 0.2027 | - | - |
| 1.2277 | 3300 | 0.1779 | - | - |
| 1.2649 | 3400 | 0.1686 | - | - |
| 1.3021 | 3500 | 0.1675 | - | - |
| 1.3393 | 3600 | 0.1487 | - | - |
| 1.3765 | 3700 | 0.141 | - | - |
| 1.4137 | 3800 | 0.1363 | - | - |
| 1.4509 | 3900 | 0.133 | - | - |
| 1.4881 | 4000 | 0.1357 | 0.1823 | 0.9977 |
| 1.5253 | 4100 | 0.1008 | - | - |
| 1.5625 | 4200 | 0.1249 | - | - |
| 1.5997 | 4300 | 0.1258 | - | - |
| 1.6369 | 4400 | 0.121 | - | - |
| 1.6741 | 4500 | 0.108 | - | - |
| 1.7113 | 4600 | 0.112 | - | - |
| 1.7485 | 4700 | 0.0988 | - | - |
| 1.7857 | 4800 | 0.0998 | - | - |
| 1.8229 | 4900 | 0.1031 | - | - |
| 1.8601 | 5000 | 0.1097 | 0.1697 | 0.9981 |
| 1.8973 | 5100 | 0.1025 | - | - |
| 1.9345 | 5200 | 0.0877 | - | - |
| 1.9717 | 5300 | 0.101 | - | - |
| 2.0089 | 5400 | 0.0963 | - | - |
| 2.0461 | 5500 | 0.083 | - | - |
| 2.0833 | 5600 | 0.0842 | - | - |
| 2.1205 | 5700 | 0.0861 | - | - |
| 2.1577 | 5800 | 0.0999 | - | - |
| 2.1949 | 5900 | 0.0972 | - | - |
| 2.2321 | 6000 | 0.0859 | 0.1635 | 0.998 |
| 2.2693 | 6100 | 0.0769 | - | - |
| 2.3065 | 6200 | 0.0778 | - | - |
| 2.3438 | 6300 | 0.0684 | - | - |
| 2.3810 | 6400 | 0.0623 | - | - |
| 2.4182 | 6500 | 0.0636 | - | - |
| 2.4554 | 6600 | 0.0647 | - | - |
| 2.4926 | 6700 | 0.0586 | - | - |
| 2.5298 | 6800 | 0.0464 | - | - |
| 2.5670 | 6900 | 0.0587 | - | - |
| 2.6042 | 7000 | 0.0617 | 0.1560 | 0.9984 |
| 2.6414 | 7100 | 0.0618 | - | - |
| 2.6786 | 7200 | 0.0453 | - | - |
| 2.7158 | 7300 | 0.0687 | - | - |
| 2.7530 | 7400 | 0.0434 | - | - |
| 2.7902 | 7500 | 0.0447 | - | - |
| 2.8274 | 7600 | 0.0508 | - | - |
| 2.8646 | 7700 | 0.0554 | - | - |
| 2.9018 | 7800 | 0.0459 | - | - |
| 2.9390 | 7900 | 0.0478 | - | - |
| 2.9762 | 8000 | 0.0449 | 0.1494 | 0.9981 |
| 3.0134 | 8100 | 0.0505 | - | - |
| 3.0506 | 8200 | 0.0484 | - | - |
| 3.0878 | 8300 | 0.0382 | - | - |
| 3.125 | 8400 | 0.0496 | - | - |
| 3.1622 | 8500 | 0.0513 | - | - |
| 3.1994 | 8600 | 0.051 | - | - |
| 3.2366 | 8700 | 0.0474 | - | - |
| 3.2738 | 8800 | 0.0382 | - | - |
| 3.3110 | 8900 | 0.0412 | - | - |
| 3.3482 | 9000 | 0.0294 | 0.1493 | 0.9983 |
| 3.3854 | 9100 | 0.0325 | - | - |
| 3.4226 | 9200 | 0.036 | - | - |
| 3.4598 | 9300 | 0.0371 | - | - |
| 3.4970 | 9400 | 0.0357 | - | - |
| 3.5342 | 9500 | 0.0285 | - | - |
| 3.5714 | 9600 | 0.0289 | - | - |
| 3.6086 | 9700 | 0.0331 | - | - |
| 3.6458 | 9800 | 0.0378 | - | - |
| 3.6830 | 9900 | 0.0249 | - | - |
| 3.7202 | 10000 | 0.0402 | 0.1478 | 0.9981 |
| 3.7574 | 10100 | 0.0298 | - | - |
| 3.7946 | 10200 | 0.0281 | - | - |
| 3.8318 | 10300 | 0.0271 | - | - |
| 3.8690 | 10400 | 0.0301 | - | - |
| 3.9062 | 10500 | 0.0274 | - | - |
| 3.9435 | 10600 | 0.023 | - | - |
| 3.9807 | 10700 | 0.0239 | - | - |
| 4.0179 | 10800 | 0.0259 | - | - |
| 4.0551 | 10900 | 0.0294 | - | - |
| 4.0923 | 11000 | 0.0233 | 0.1483 | 0.9983 |
| 4.1295 | 11100 | 0.033 | - | - |
| 4.1667 | 11200 | 0.0337 | - | - |
| 4.2039 | 11300 | 0.027 | - | - |
| 4.2411 | 11400 | 0.0262 | - | - |
| 4.2783 | 11500 | 0.0243 | - | - |
| 4.3155 | 11600 | 0.028 | - | - |
| 4.3527 | 11700 | 0.019 | - | - |
| 4.3899 | 11800 | 0.0187 | - | - |
| 4.4271 | 11900 | 0.0222 | - | - |
| 4.4643 | 12000 | 0.0227 | 0.1416 | 0.9981 |
| 4.5015 | 12100 | 0.0213 | - | - |
| 4.5387 | 12200 | 0.0183 | - | - |
| 4.5759 | 12300 | 0.0223 | - | - |
| 4.6131 | 12400 | 0.0205 | - | - |
| 4.6503 | 12500 | 0.0229 | - | - |
| 4.6875 | 12600 | 0.0172 | - | - |
| 4.7247 | 12700 | 0.0272 | - | - |
| 4.7619 | 12800 | 0.0157 | - | - |
| 4.7991 | 12900 | 0.0161 | - | - |
| 4.8363 | 13000 | 0.015 | 0.1414 | 0.9982 |
| 4.8735 | 13100 | 0.0196 | - | - |
| 4.9107 | 13200 | 0.0179 | - | - |
| 4.9479 | 13300 | 0.0196 | - | - |
| 4.9851 | 13400 | 0.015 | - | - |
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.0
- Transformers: 4.45.1
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
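For readers unfamiliar with this loss, below is a minimal sketch of how `MultipleNegativesRankingLoss` is typically wired into a Sentence Transformers training run. The base checkpoint and the (anchor, positive) pairs are placeholders and are not taken from this card.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Placeholder base model; this card does not state which checkpoint was used.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# The loss expects (anchor, positive) pairs; the other positives in the batch
# serve as in-batch negatives for each anchor.
train_examples = [
    InputExample(texts=["What is the capital of France?", "Paris is the capital of France."]),
    InputExample(texts=["Who wrote Hamlet?", "Hamlet was written by William Shakespeare."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```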
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
BroAlanTaps/GPT2-large-4-52000steps
|
BroAlanTaps
| 2024-10-18T03:50:22Z | 136 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T03:48:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
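Pending an official example, the following is a minimal, hedged sketch of loading this checkpoint with 🤗 Transformers; the standard causal-LM classes are assumed from the model's `gpt2` / `text-generation` tags, and the prompt and generation settings are illustrative only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BroAlanTaps/GPT2-large-4-52000steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt; adjust generation settings to taste.
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```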
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ShuoGZ/llama-3.2-1B-Instruct-abliterated
|
ShuoGZ
| 2024-10-18T03:48:50Z | 137 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T03:00:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
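Pending an official example, a minimal sketch of loading this checkpoint with 🤗 Transformers is shown below; it assumes the fine-tune keeps the standard Llama 3.2 Instruct chat template, and the prompt and generation settings are illustrative only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShuoGZ/llama-3.2-1B-Instruct-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumes the checkpoint retains the Llama 3.2 Instruct chat template.
messages = [{"role": "user", "content": "Write a short haiku about autumn."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```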
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cuongdev/testtonfhop
|
cuongdev
| 2024-10-18T03:46:19Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-18T03:41:28Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### testtonfhop Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
You can test the concept via the AUTOMATIC1111 Colab notebook: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
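If you prefer plain 🧨 Diffusers over the A1111 UI, a minimal sketch is shown below; running on a CUDA GPU in fp16 and the exact trigger phrase for the concept are assumptions, not details stated in this card.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("cuongdev/testtonfhop", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Hypothetical prompt: the exact DreamBooth trigger word is not stated;
# "testtonfhop" is assumed from the model name.
image = pipe("a photo of testtonfhop person, portrait, high detail").images[0]
image.save("sample.png")
```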
Sample pictures of this concept:
|