modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-28 18:27:08) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 501 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-28 18:25:37) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
ConvLLaVA/ConvLLaVA-sft-1024 | ConvLLaVA | 2024-05-28T08:32:29Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2405.15738",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T17:18:22Z | ---
datasets:
- liuhaotian/LLaVA-Instruct-150K
---
# ConvLLaVA Model Card
## Model details
**Model type:** ConvLLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture. Base LLM: lmsys/vicuna-7b-v1.5
**Model date:** ConvLLaVA-1024 was trained in March 2024.
**Paper or resources for more information:** https://github.com/alibaba/conv-llava/
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:** https://github.com/alibaba/conv-llava/issues
## Intended use
**Primary intended uses:** The primary use of ConvLLaVA is research on large multimodal models and chatbots.
**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 1.2M ShareGPT4V-PT caption data.
- 100K ShareGPT4V caption data.
- 1.4M ALLaVA caption and instruction data.
- 186K VFLAN multitask data.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
## Paper
arxiv.org/abs/2405.15738
|
lgk03/WITHINAPPS_NDD-claroline_test-content_tags | lgk03 | 2024-05-28T08:32:28Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T08:16:40Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: WITHINAPPS_NDD-claroline_test-content_tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WITHINAPPS_NDD-claroline_test-content_tags
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0456
- Accuracy: 0.9871
- F1: 0.9872
- Precision: 0.9878
- Recall: 0.9871
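For a quick check of the checkpoint, here is a hedged sketch using the `transformers` pipeline API (the label names and the expected input format depend on the undocumented training data):
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from this repo
classifier = pipeline(
    "text-classification",
    model="lgk03/WITHINAPPS_NDD-claroline_test-content_tags",
)
print(classifier("example page content to classify"))
```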
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.9978 | 111 | 0.0464 | 0.9871 | 0.9872 | 0.9878 | 0.9871 |
| No log | 1.9955 | 222 | 0.0456 | 0.9871 | 0.9872 | 0.9878 | 0.9871 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ConvLLaVA/ConvLLaVA-sft-768 | ConvLLaVA | 2024-05-28T08:32:19Z | 16 | 1 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2405.15738",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T14:49:07Z | ---
datasets:
- liuhaotian/LLaVA-Instruct-150K
---
# ConvLLaVA Model Card
## Model details
**Model type:** ConvLLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture. Base LLM: lmsys/vicuna-7b-v1.5
**Model date:** ConvLLaVA-768 was trained in March 2024.
**Paper or resources for more information:** https://github.com/alibaba/conv-llava/
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:** https://github.com/alibaba/conv-llava/issues
## Intended use
**Primary intended uses:** The primary use of ConvLLaVA is research on large multimodal models and chatbots.
**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 1.2M ShareGPT4V-PT caption data.
- 100K ShareGPT4V caption data.
- 1.4M ALLaVA caption and instruction data.
- 186K VFLAN multitask data.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
## Paper
arxiv.org/abs/2405.15738 |
ConvLLaVA/ConvLLaVA-pretrain-1536 | ConvLLaVA | 2024-05-28T08:31:38Z | 13 | 2 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"dataset:Lin-Chen/ShareGPT4V",
"dataset:FreedomIntelligence/ALLaVA-4V",
"dataset:Vision-Flan/vision-flan_191-task_1k",
"arxiv:2405.15738",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T08:35:38Z | ---
datasets:
- Lin-Chen/ShareGPT4V
- FreedomIntelligence/ALLaVA-4V
- Vision-Flan/vision-flan_191-task_1k
---
# ConvLLaVA Model Card
## Model details
**Model type:** ConvLLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture. Base LLM: lmsys/vicuna-7b-v1.5
**Model date:** ConvLLaVA-pretrain-1536 was trained in March 2024.
**Paper or resources for more information:** https://github.com/alibaba/conv-llava/
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:** https://github.com/alibaba/conv-llava/issues
## Intended use
**Primary intended uses:** The primary use of ConvLLaVA is research on large multimodal models and chatbots.
**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 1.2M ShareGPT4V-PT caption data.
- 100K ShareGPT4V caption data.
- 1.4M ALLaVA caption and instruction data.
- 186K VFLAN multitask data.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
## Paper
arxiv.org/abs/2405.15738
|
DaichiT/door_adjuster | DaichiT | 2024-05-28T08:31:28Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-28T08:24:00Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2
inference: true
instance_prompt: a photo of sks door_adjuster
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - DaichiT/door_adjuster
This is a DreamBooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks door_adjuster using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
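Until the official snippet is added, here is a minimal hedged sketch using the standard `diffusers` API (it assumes this repo stores a full Stable Diffusion pipeline, which the DreamBooth training script saves by default):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline from this repo
pipe = StableDiffusionPipeline.from_pretrained(
    "DaichiT/door_adjuster", torch_dtype=torch.float16
).to("cuda")

# Generate an image with the instance prompt used during training
image = pipe("a photo of sks door_adjuster", num_inference_steps=50).images[0]
image.save("sks_door_adjuster.png")
```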
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
tvlife/Llama-3-Open-Ko-8B-Instruct-tvlife | tvlife | 2024-05-28T08:31:15Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:beomi/Llama-3-Open-Ko-8B",
"base_model:finetune:beomi/Llama-3-Open-Ko-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T08:27:02Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: beomi/Llama-3-Open-Ko-8B
---
# Uploaded model
- **Developed by:** tvlife
- **License:** apache-2.0
- **Finetuned from model:** beomi/Llama-3-Open-Ko-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
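A minimal loading sketch with plain `transformers` is shown below (this assumes the repo contains merged 16-bit weights rather than only LoRA adapters; if it holds adapters, load them with PEFT instead):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tvlife/Llama-3-Open-Ko-8B-Instruct-tvlife"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Simple generation example (Korean prompt, since the base model is Korean)
inputs = tokenizer("안녕하세요, 자기소개를 해주세요.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```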
|
DiederikMartens/eBERT_sa_cv_9_fold1 | DiederikMartens | 2024-05-28T08:30:48Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T08:08:43Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_9_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_9_fold1
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5401
- F1: 0.5989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.5491 | 0.4553 |
| 0.6277 | 2.0 | 650 | 0.5053 | 0.5024 |
| 0.6277 | 3.0 | 975 | 0.5401 | 0.5989 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
zmilczarek/pii-detection-roberta-v3 | zmilczarek | 2024-05-28T08:30:38Z | 166 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-28T08:29:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-580978 | fine-tuned | 2024-05-28T08:30:26Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Social Media",
"Arguments",
"Debate",
"Opinions",
"Perspectives",
"en",
"dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-580978",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T08:29:57Z | ---
license: apache-2.0
datasets:
- fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-580978
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Social Media
- Arguments
- Debate
- Opinions
- Perspectives
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
counter arguments on social media impact
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-580978',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
Rizwan313/MiniCPM-Llama3-V-2_5-int4 | Rizwan313 | 2024-05-28T08:29:13Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"minicpmv",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | feature-extraction | 2024-05-28T08:25:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DiederikMartens/mBERT_sa_cv_9_fold1 | DiederikMartens | 2024-05-28T08:29:01Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T08:07:55Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_9_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_9_fold1
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7432
- F1: 0.2851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.7432 | 0.2851 |
| 0.7484 | 2.0 | 650 | 0.7382 | 0.2851 |
| 0.7484 | 3.0 | 975 | 0.7363 | 0.2851 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Newton7/MyDrive | Newton7 | 2024-05-28T08:28:58Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-28T08:28:56Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: MyDrive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MyDrive
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unspecified dataset.
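Because this repo holds a PEFT adapter (e.g. LoRA) rather than full weights, a hedged loading sketch looks like this (it assumes access to the gated base model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "Newton7/MyDrive")  # attach the adapter from this repo
tokenizer = AutoTokenizer.from_pretrained(base_id)
```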
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
DiederikMartens/tsBERT_sa_cv_9_fold1 | DiederikMartens | 2024-05-28T08:28:35Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T08:07:30Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_9_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_9_fold1
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5209
- F1: 0.6927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.3735 | 0.5700 |
| 0.4319 | 2.0 | 650 | 0.4329 | 0.6771 |
| 0.4319 | 3.0 | 975 | 0.5209 | 0.6927 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Mustain/finetuned-llama-3-8b-Instruct-bnb-4bit-NS-dataset | Mustain | 2024-05-28T08:27:25Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T08:11:51Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Mustain
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-866232 | fine-tuned | 2024-05-28T08:27:19Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Science",
"English",
"Research",
"Education",
"Literature",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-866232",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T08:26:49Z | ---
license: apache-2.0
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-866232
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Science
- English
- Research
- Education
- Literature
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
general domain
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-866232',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-43315 | fine-tuned | 2024-05-28T08:25:32Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"News",
"Articles",
"Journalism",
"Media",
"Current Events",
"en",
"dataset:fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-43315",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T08:25:03Z | ---
license: apache-2.0
datasets:
- fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-43315
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- News
- Articles
- Journalism
- Media
- Current Events
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
news articles
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-43315',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
DaichiT/counterweight | DaichiT | 2024-05-28T08:24:08Z | 31 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-28T08:16:07Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2
inference: true
instance_prompt: a photo of sks countetweight
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - DaichiT/counterweight
This is a DreamBooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks countetweight using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
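In the meantime, a hedged `diffusers` sketch (assuming this repo stores a full Stable Diffusion pipeline, as the DreamBooth training script saves by default):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DaichiT/counterweight", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt from training (spelled as in the card)
image = pipe("a photo of sks countetweight").images[0]
image.save("sks_counterweight.png")
```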
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
rlaorrn/jeju_stt_v2 | rlaorrn | 2024-05-28T08:24:03Z | 99 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:rlaorrn/working",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-27T12:19:24Z | ---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
base_model: openai/whisper-base
datasets:
- rlaorrn/working
model-index:
- name: jeju_stt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jeju_stt
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the jeju_audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3820
- Cer: 12.0409
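A hedged usage sketch with the `transformers` ASR pipeline (the audio path below is a placeholder for a local recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="rlaorrn/jeju_stt_v2")
result = asr("jeju_sample.wav")  # placeholder path to a local audio file
print(result["text"])
```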
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3689 | 2.0 | 1000 | 0.3853 | 13.4054 |
| 0.1884 | 4.0 | 2000 | 0.3488 | 11.9817 |
| 0.1059 | 6.0 | 3000 | 0.3607 | 11.9350 |
| 0.0634 | 8.0 | 4000 | 0.3820 | 12.0409 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
DokHee/Llama-3-Open-Ko-8B-Instruct-alphaEdu100-gguf | DokHee | 2024-05-28T08:23:12Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"gguf",
"en",
"base_model:beomi/Llama-3-Open-Ko-8B",
"base_model:finetune:beomi/Llama-3-Open-Ko-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T08:23:11Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: beomi/Llama-3-Open-Ko-8B
---
# Uploaded model
- **Developed by:** DokHee
- **License:** apache-2.0
- **Finetuned from model:** beomi/Llama-3-Open-Ko-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DaichiT/copper_alloy | DaichiT | 2024-05-28T08:22:42Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-28T08:15:11Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2
inference: true
instance_prompt: a photo of sks copper_alloy
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - DaichiT/copper_alloy
This is a DreamBooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks copper_alloy using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
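A hedged `diffusers` sketch for this checkpoint (same assumption as the other DreamBooth repos in this series: a full Stable Diffusion pipeline is stored here):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DaichiT/copper_alloy", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks copper_alloy").images[0]
image.save("sks_copper_alloy.png")
```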
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
LiteLLMs/free-evo-qwen72b-v0.8-re-GGUF | LiteLLMs | 2024-05-28T08:21:50Z | 34 | 0 | transformers | [
"transformers",
"gguf",
"GGUF",
"en",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T13:03:06Z |
---
language:
- en
license: mit
library_name: transformers
tags:
- GGUF
model-index:
- name: free-evo-qwen72b-v0.8-re
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 79.86
name: normalized accuracy
verified: false
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=freewheelin/free-evo-qwen72b-v0.8-re
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 91.34
name: normalized accuracy
verified: false
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=freewheelin/free-evo-qwen72b-v0.8-re
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 78
name: accuracy
verified: false
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=freewheelin/free-evo-qwen72b-v0.8-re
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.85
verified: false
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=freewheelin/free-evo-qwen72b-v0.8-re
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 87.77
name: accuracy
verified: false
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=freewheelin/free-evo-qwen72b-v0.8-re
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.89
name: accuracy
verified: false
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=freewheelin/free-evo-qwen72b-v0.8-re
name: Open LLM Leaderboard
quantized_by: andrijdavid
---
# free-evo-qwen72b-v0.8-re-GGUF
- Original model: [free-evo-qwen72b-v0.8-re](https://huggingface.co/freewheelin/free-evo-qwen72b-v0.8-re)
<!-- description start -->
## Description
This repo contains GGUF format model files for [free-evo-qwen72b-v0.8-re](https://huggingface.co/freewheelin/free-evo-qwen72b-v0.8-re).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/free-evo-qwen72b-v0.8-re-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/free-evo-qwen72b-v0.8-re-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/free-evo-qwen72b-v0.8-re-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/free-evo-qwen72b-v0.8-re-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: free-evo-qwen72b-v0.8-re
# Model Card for free-evo-qwen72b-v0.8
## Developed by : [Freewheelin](https://freewheelin-recruit.oopy.io/) AI Technical Team
## 2024 4th May - avg. 81.28 [Open Llm Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| - | ----: |
| Avg. | 81.28 |
| ARC (25-Shot) | 79.86 |
| HellaSwag (10-Shot) | 91.32 |
| MMLU (5-Shot) | 78.00 |
| TruthfulQA (0-shot) | 74.85 |
| Winogrande (5-shot) | 87.77 |
| GSM8k (5-shot) | 75.89 |
## Method
- We were inspired by this [Sakana project](https://sakana.ai/evolutionary-model-merge/)
## Process
You need two models with the same architecture.
- Choose one model and fine-tune it to create a gap between the original model and the fine-tuned one. It doesn't matter whether the evaluation score is higher or lower.
- Merge the two models.
- Evaluate the merged model.
- Fine-tune a specific evaluation part of the model if you need to increase the score for that part. (It's unlikely to work as you think, but you can try it.)
- Merge the models again.
- Evaluate again.
- Keep going until the average evaluation score is higher than the original one.
That's it. Simple.
You can create a framework to automate this process.
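As an illustration only (not the authors' actual framework), the loop can be sketched like this, where `finetune`, `merge`, and `evaluate` are hypothetical helpers you would supply:
```python
def evolve(model_a, model_b, finetune, merge, evaluate, max_rounds=10):
    """Hypothetical sketch of the merge-and-evaluate loop described above."""
    baseline = evaluate(model_a)            # average benchmark score of the original model
    candidate = merge(model_a, model_b)     # merge the two same-architecture models
    for _ in range(max_rounds):
        if evaluate(candidate) > baseline:  # stop once the average score beats the original
            return candidate
        tuned = finetune(candidate)         # optionally target a weak benchmark area
        candidate = merge(candidate, tuned) # merge again and re-evaluate
    return candidate
```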
## Base Architecture
- QWEN2
## Base Models
- several QWEN2 based models
<!-- original-model-card end -->
|
JiAYu1997/HRJD_FinetuneV2_1 | JiAYu1997 | 2024-05-28T08:19:37Z | 0 | 0 | null | [
"trl",
"sft",
"generated_from_trainer",
"base_model:taide/Llama3-TAIDE-LX-8B-Chat-Alpha1",
"base_model:finetune:taide/Llama3-TAIDE-LX-8B-Chat-Alpha1",
"license:other",
"region:us"
] | null | 2024-05-28T08:01:13Z | ---
license: other
base_model: taide/Llama3-TAIDE-LX-8B-Chat-Alpha1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: HRJD_FinetuneV2_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HRJD_FinetuneV2_1
This model is a fine-tuned version of [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.33.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.13.3
|
ConvLLaVA/ConvLLaVA-ConvNeXt-1536 | ConvLLaVA | 2024-05-28T08:16:54Z | 2,032 | 1 | transformers | [
"transformers",
"pytorch",
"convnext",
"arxiv:2405.15738",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T08:36:29Z | # ConvNeXt Model Card
## Model details
**Model type:** ConvNeXt is an open-source visual encoder fine-tuned alongside an LLM on multimodal caption and instruction-following data. The base model is laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup.
**Model date:** ConvLLaVA-ConvNeXt-1536 was trained in March 2024.
**Paper or resources for more information:** https://github.com/alibaba/conv-llava/
**Where to send questions or comments about the model:** https://github.com/alibaba/conv-llava/issues
## Intended use
**Primary intended uses:** The primary use of ConvLLaVA-ConvNeXt is research on large multimodal models and chatbots.
## Paper
arxiv.org/abs/2405.15738
|
zacll/chinese-adult-novel | zacll | 2024-05-28T08:16:29Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-28T07:21:27Z | ---
license: apache-2.0
---
|
ConvLLaVA/ConvLLaVA-ConvNeXt-768 | ConvLLaVA | 2024-05-28T08:16:24Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"convnext",
"arxiv:2405.15738",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T08:35:58Z | # ConvNeXt Model Card
## Model details
**Model type:** ConvNeXt is an open-source visual encoder trained by fine-tuning an LLM on multimodal caption and instruction-following data. The base model is laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup.
**Model date:** ConvLLaVA-ConvNeXt-768 was trained in March 2024.
Paper or resources for more information: https://github.com/alibaba/conv-llava/
Where to send questions or comments about the model: https://github.com/alibaba/conv-llava/issues
## Intended use
**Primary intended uses:** The primary use of ConvLLaVA-ConvNeXt is research on large multimodal models and chatbots.
## Paper
arxiv.org/abs/2405.15738
|
DaichiT/copper | DaichiT | 2024-05-28T08:12:47Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-28T08:05:05Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2
inference: true
instance_prompt: a photo of sks copper
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - DaichiT/copper
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks copper using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
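Until the official snippet is added, a minimal sketch with Diffusers might look like the following; only the repo id and the `a photo of sks copper` instance prompt come from this card, while the dtype, device, and step count are assumptions.
```python
# Hedged sketch: load this DreamBooth checkpoint with diffusers and sample
# using the instance prompt from this card. dtype/device/steps are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("DaichiT/copper", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of sks copper", num_inference_steps=25).images[0]
image.save("sks_copper.png")
```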
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
joeyiexec/peftllama2 | joeyiexec | 2024-05-28T08:12:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T08:12:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DaichiT/concrete | DaichiT | 2024-05-28T08:12:21Z | 30 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-28T08:04:30Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2
inference: true
instance_prompt: a photo of sks concrete
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - DaichiT/concrete
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks concrete using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
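Until the official snippet is added, a minimal sketch with Diffusers might look like the following; only the repo id and the `a photo of sks concrete` instance prompt come from this card, while the dtype, device, and step count are assumptions.
```python
# Hedged sketch: load this DreamBooth checkpoint with diffusers and sample
# using the instance prompt from this card. dtype/device/steps are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("DaichiT/concrete", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of sks concrete", num_inference_steps=25).images[0]
image.save("sks_concrete.png")
```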
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lgk03/WITHINAPPS_NDD-addressbook_test-content_tags | lgk03 | 2024-05-28T08:11:18Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T08:04:40Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: WITHINAPPS_NDD-addressbook_test-content_tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WITHINAPPS_NDD-addressbook_test-content_tags
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1455
- Accuracy: 0.9489
- F1: 0.9500
- Precision: 0.9560
- Recall: 0.9489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.9953 | 53 | 0.1517 | 0.9489 | 0.9500 | 0.9560 | 0.9489 |
| No log | 1.9906 | 106 | 0.1455 | 0.9489 | 0.9500 | 0.9560 | 0.9489 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ConvLLaVA/ConvLLaVA-ConvNeXt-1024 | ConvLLaVA | 2024-05-28T08:10:26Z | 177 | 0 | transformers | [
"transformers",
"pytorch",
"convnext",
"arxiv:2405.15738",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T08:36:09Z | # ConvNeXt Model Card
## Model details
**Model type:** ConvNeXt is an open-source visual encoder trained by fine-tuning an LLM on multimodal caption and instruction-following data. The base model is laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup.
**Model date:** ConvLLaVA-ConvNeXt-1024 was trained in March 2024.
Paper or resources for more information: https://github.com/alibaba/conv-llava/
Where to send questions or comments about the model: https://github.com/alibaba/conv-llava/issues
## Intended use
**Primary intended uses:** The primary use of ConvLLaVA-ConvNeXt is research on large multimodal models and chatbots.
## Paper
arxiv.org/abs/2405.15738
|
SerchiBoi/DTT-Chatbot-Piloto-v4 | SerchiBoi | 2024-05-28T08:09:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T08:08:37Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** SerchiBoi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ferrazzipietro/Llama-2-7b-chat-hf_adapters_en.layer1_NoQuant_torch.bfloat16_16_32_0.01_2_0.0002 | ferrazzipietro | 2024-05-28T08:09:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-14T18:19:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/juggernaut-xl-rundiffusion-hyper-sdxl | John6666 | 2024-05-28T08:07:50Z | 348 | 5 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-28T08:03:08Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
---
Original model is [here](https://civitai.com/models/133005?modelVersionId=471120).
|
DiederikMartens/tsBERT_sa_cv_9_fold0 | DiederikMartens | 2024-05-28T08:07:24Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T07:46:21Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_9_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_9_fold0
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4935
- F1: 0.7006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.4017 | 0.6081 |
| 0.4472 | 2.0 | 650 | 0.4388 | 0.6617 |
| 0.4472 | 3.0 | 975 | 0.4935 | 0.7006 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
xX-FANE-Xx/koala-13B-HF-Q2_K-GGUF | xX-FANE-Xx | 2024-05-28T08:07:00Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"koala",
"ShareGPT",
"llama",
"gptq",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"dataset:RyokoAI/ShareGPT52K",
"dataset:Hello-SimpleAI/HC3",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T08:06:43Z | ---
license: other
library_name: transformers
tags:
- koala
- ShareGPT
- llama
- gptq
- llama-cpp
- gguf-my-repo
datasets:
- RyokoAI/ShareGPT52K
- Hello-SimpleAI/HC3
pipeline_tag: text-generation
---
# xX-FANE-Xx/koala-13B-HF-Q2_K-GGUF
This model was converted to GGUF format from [`TheBloke/koala-13B-HF`](https://huggingface.co/TheBloke/koala-13B-HF) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheBloke/koala-13B-HF) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo xX-FANE-Xx/koala-13B-HF-Q2_K-GGUF --model koala-13b-hf-q2_k.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo xX-FANE-Xx/koala-13B-HF-Q2_K-GGUF --model koala-13b-hf-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m koala-13b-hf-q2_k.gguf -n 128
```
|
RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf | RichardErkhov | 2024-05-28T08:06:25Z | 15 | 0 | null | [
"gguf",
"arxiv:2308.10882",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-27T11:31:21Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Giraffe-v2-70b-32k - GGUF
- Model creator: https://huggingface.co/abacusai/
- Original model: https://huggingface.co/abacusai/Giraffe-v2-70b-32k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Giraffe-v2-70b-32k.Q2_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q2_K.gguf) | Q2_K | 23.71GB |
| [Giraffe-v2-70b-32k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [Giraffe-v2-70b-32k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [Giraffe-v2-70b-32k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [Giraffe-v2-70b-32k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [Giraffe-v2-70b-32k.Q3_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q3_K.gguf) | Q3_K | 30.99GB |
| [Giraffe-v2-70b-32k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [Giraffe-v2-70b-32k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [Giraffe-v2-70b-32k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [Giraffe-v2-70b-32k.Q4_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q4_0.gguf) | Q4_0 | 36.2GB |
| [Giraffe-v2-70b-32k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [Giraffe-v2-70b-32k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [Giraffe-v2-70b-32k.Q4_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q4_K | 38.58GB |
| [Giraffe-v2-70b-32k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [Giraffe-v2-70b-32k.Q4_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q4_1 | 40.2GB |
| [Giraffe-v2-70b-32k.Q5_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_0 | 44.2GB |
| [Giraffe-v2-70b-32k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [Giraffe-v2-70b-32k.Q5_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_K | 45.41GB |
| [Giraffe-v2-70b-32k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [Giraffe-v2-70b-32k.Q5_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_1 | 48.2GB |
| [Giraffe-v2-70b-32k.Q6_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q6_K | 52.7GB |
| [Giraffe-v2-70b-32k.Q8_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q8_0 | 68.26GB |
Original model description:
---
tags:
- llama2
---

## Model Details
### Model Description
We have followed up on our previous training runs related to extending the context length
of Llama models. The associated github repository
https://github.com/abacusai/long-context
has some basic details on our approach and metrics. We have also published a paper on arXiv
that covers our experiments and analysis a lot more comprehensively.
http://arxiv.org/abs/2308.10882
- **Developed by:** [Abacus.AI](https://abacus.ai)
- **Model type:** Transformer based autoregressive causal language model
- **License:** Llama 2 Community License: https://github.com/facebookresearch/llama/blob/main/LICENSE
- **Finetuned from model:** Llama V2 70B
### Usage
To use this model at longer lengths the model needs to be patched to interpolate the longer context
lengths. It will not work if it is simply loaded with the `AutoModel` framework of `transformers`.
For full details and usage see:
https://github.com/abacusai/Long-Context
The evaluation section has detailed code for how to load and patch the model for inference (or further fine-tuning).
Note in particular the `max_position_embeddings` is not relevant since the patched module dynamically reallocates
the position buffers as required.
The tokenizer corresponding to this model is https://huggingface.co/abacusai/Giraffe-v1-Tokenizer.
Using the code in the repository you can load this model with the following code:
```python
from models import load_model, load_tokenizer
tokenizer = load_tokenizer()
model = load_model('abacusai/Giraffe-v2-70b-32k', scale=8)
```
|
RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf | RichardErkhov | 2024-05-28T08:05:15Z | 23 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-27T09:21:52Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Euryale-1.4-L2-70B - GGUF
- Model creator: https://huggingface.co/Sao10K/
- Original model: https://huggingface.co/Sao10K/Euryale-1.4-L2-70B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Euryale-1.4-L2-70B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.Q2_K.gguf) | Q2_K | 23.71GB |
| [Euryale-1.4-L2-70B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [Euryale-1.4-L2-70B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [Euryale-1.4-L2-70B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [Euryale-1.4-L2-70B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [Euryale-1.4-L2-70B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.Q3_K.gguf) | Q3_K | 30.99GB |
| [Euryale-1.4-L2-70B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [Euryale-1.4-L2-70B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [Euryale-1.4-L2-70B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [Euryale-1.4-L2-70B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.Q4_0.gguf) | Q4_0 | 36.2GB |
| [Euryale-1.4-L2-70B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [Euryale-1.4-L2-70B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/blob/main/Euryale-1.4-L2-70B.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [Euryale-1.4-L2-70B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q4_K | 38.58GB |
| [Euryale-1.4-L2-70B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [Euryale-1.4-L2-70B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q4_1 | 40.2GB |
| [Euryale-1.4-L2-70B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q5_0 | 44.2GB |
| [Euryale-1.4-L2-70B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [Euryale-1.4-L2-70B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q5_K | 45.41GB |
| [Euryale-1.4-L2-70B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [Euryale-1.4-L2-70B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q5_1 | 48.2GB |
| [Euryale-1.4-L2-70B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q6_K | 52.7GB |
| [Euryale-1.4-L2-70B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Euryale-1.4-L2-70B-gguf/tree/main/) | Q8_0 | 68.26GB |
Original model description:
---
license: llama2
language:
- en
---
gguf quants: https://huggingface.co/Sao10K/Euryale-1.4-L2-70B-GGUF
1.3, but better? I guess.
Base Merged Model ratios adjusted.
NSFL portion of Hesperus v1 dataset trained and applied.
LimaRP merged in at a ~25% weight at the end.
Subjectively better in some aspects (e.g. long-form RP), worse in others (e.g. chat-style RPs).
overall a minor improvement in my eyes.
1.5 will include Hesperus v2 dataset in its entirety.
format: alpaca.
|
alijawad07/aya-23-8B-AWQ-GEMM | alijawad07 | 2024-05-28T08:01:37Z | 90 | 2 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-05-28T06:40:24Z | # Aya-23-8B - AWQ Quantized
- Model creator: [Cohere For AI](https://huggingface.co/cohere-for-ai)
- Original model: [Aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)
<!-- description start -->
## Description
This repo contains AWQ model files for [Cohere's Aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B).
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. The model focuses on pairing a highly performant pre-trained Command family of models with the recently released Aya Collection. The result is a powerful multilingual large language model serving 23 languages.
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantized models. However, using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
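As a rough sketch, serving this checkpoint with vLLM could look like the snippet below; the repo id and sampling settings are assumptions taken from this card, and it presumes a vLLM build with AWQ support and support for the Cohere (Aya/Command) architecture.
```python
# Hedged sketch of high-throughput inference with vLLM on this AWQ checkpoint.
# Assumes a vLLM build that supports AWQ and the Cohere architecture.
from vllm import LLM, SamplingParams

llm = LLM(model="alijawad07/aya-23-8B-AWQ-GEMM", quantization="awq")
params = SamplingParams(temperature=0.3, max_tokens=100)

outputs = llm.generate(["Write a short note to my mother telling her how much I love her."], params)
print(outputs[0].outputs[0].text)
```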
<!-- description end -->
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
It covers 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Model: aya-23-8B-AWQ-GEMM
- Model Size: 8 billion parameters
- Bits: 4
- Q-Group Size: 128
**This is an AWQ quantized version of the Aya-23-8B model using AutoAWQ.**
### Usage
Please install transformers from the source repository that includes the necessary changes for this model.
```python
# pip install transformers==4.41.1
# pip install autoawq
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM
quant_path = "path/to/quantized/model"
tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
|
TomTom42/q-FrozenLake-v1-4x4-noSlippery | TomTom42 | 2024-05-28T08:01:33Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-28T08:01:29Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="TomTom42/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
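After the environment is created, acting greedily with the downloaded Q-table might look like the sketch below; it assumes the pickled dict exposes a `qtable` array (as in the standard course template) and a Gym API where `reset()` returns `(state, info)` and `step()` returns a five-tuple.
```python
import numpy as np

# Greedy rollout with the downloaded Q-table. model["qtable"] and the newer
# gym/gymnasium step/reset signatures are assumptions, not guarantees.
state, info = env.reset()
done = False
total_reward = 0.0

while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the best known action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward

print("episode return:", total_reward)
```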
|
thewordsmiths/mistral_dpo | thewordsmiths | 2024-05-28T08:00:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"region:us"
] | null | 2024-05-28T07:59:38Z | ---
library_name: peft
base_model: unsloth/mistral-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
mansaripo/thewordsmiths | mansaripo | 2024-05-28T07:55:51Z | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b",
"base_model:adapter:unsloth/llama-3-8b",
"region:us"
] | null | 2024-05-28T07:52:36Z | ---
library_name: peft
base_model: unsloth/llama-3-8b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
bradleymarques/my-test-model | bradleymarques | 2024-05-28T07:54:40Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-05-28T07:54:04Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iron-huray/llama_test | iron-huray | 2024-05-28T07:53:22Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-22T00:58:26Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: llama_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_test
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2 |
IneG/RoBERTa_pretrained_litcov10K | IneG | 2024-05-28T07:51:15Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-28T07:48:00Z | ---
tags:
- generated_from_trainer
model-index:
- name: RoBERTa_pretrained_litcov10K
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa_pretrained_litcov10K
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
DiederikMartens/eBERT_sa_cv_12_fold9 | DiederikMartens | 2024-05-28T07:46:13Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T07:38:58Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_12_fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_12_fold9
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5047
- F1: 0.5356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4773 | 0.4302 |
| No log | 2.0 | 452 | 0.4493 | 0.5255 |
| 0.5125 | 3.0 | 678 | 0.5047 | 0.5356 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/mBERT_sa_cv_12_fold9 | DiederikMartens | 2024-05-28T07:44:56Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T07:34:25Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_12_fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_12_fold9
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4492
- F1: 0.5742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4549 | 0.4987 |
| No log | 2.0 | 452 | 0.4037 | 0.5291 |
| 0.4719 | 3.0 | 678 | 0.4492 | 0.5742 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/tsBERT_sa_cv_12_fold9 | DiederikMartens | 2024-05-28T07:43:11Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T07:33:22Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_12_fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_12_fold9
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4022
- F1: 0.5954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3970 | 0.5359 |
| No log | 2.0 | 452 | 0.4022 | 0.5954 |
| 0.3494 | 3.0 | 678 | 0.4937 | 0.5953 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
John6666/nsfw-anime-xl-v1-sdxl | John6666 | 2024-05-28T07:41:29Z | 36 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-28T07:37:02Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---
Original model is [here](https://civitai.com/models/461074/nsfw-animexl).
|
DiederikMartens/eBERT_sa_cv_12_fold8 | DiederikMartens | 2024-05-28T07:38:53Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T07:23:40Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_12_fold8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_12_fold8
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5513
- F1: 0.4990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5354 | 0.4015 |
| No log | 2.0 | 452 | 0.5639 | 0.3975 |
| 0.5216 | 3.0 | 678 | 0.5513 | 0.4990 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
huypn16/MetaMath-DeepSeekMath-7B | huypn16 | 2024-05-28T07:37:32Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T09:45:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
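In the meantime, here is a minimal sketch that assumes the checkpoint loads with the standard transformers causal-LM API from this repo id (the expected prompt format is not documented, so the prompt below is only illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huypn16/MetaMath-DeepSeekMath-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Question: What is 12 * 7? Answer:"  # illustrative only; the intended prompt template is unknown
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```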
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
furkanbicer/Taxi-v3 | furkanbicer | 2024-05-28T07:35:00Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-28T07:34:58Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # `import gym` also works with older setups

# `load_from_hub` is the download helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="furkanbicer/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
TurkuNLP/xlmr-qa-extraction-en | TurkuNLP | 2024-05-28T07:34:48Z | 166 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-11-02T09:59:09Z | ---
license: cc-by-nc-sa-4.0
library_name: transformers
pipeline_tag: token-classification
widget:
- text: "Do you think that looks like a cat? Answer: I don't think so."
- example_title: "cat"
---
### xlm-roberta-base for token classification, specifically fine-tuned for question-answer extraction for English
This is `xlm-roberta-base` fine-tuned on manually annotated Finnish data and ChatGPT-annotated data.
### Hyperparameters
```
batch_size = 8
epochs = 10 (trained for less)
base_LM_model = "xlm-roberta-base"
max_seq_len = 512
learning_rate = 5e-5
```
### Performance
```
Accuracy = 0.88
Question F1 = 0.77
Answer F1 = 0.81
```
### Usage
To get the best question-answer pairs, use the Hugging Face pipeline with no aggregation strategy and do some post-processing, as in this [script](https://github.com/TurkuNLP/register-qa/blob/main/token-classification/scripts/extract_qa_en_no_entropy.py).
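A minimal sketch of that pipeline call is shown here; the tag names in the output come from the model's label config and are not documented in this card, so the printing step just shows the raw token-level predictions:

```python
from transformers import pipeline

# Token classifier without aggregation, as recommended above
qa_tagger = pipeline(
    "token-classification",
    model="TurkuNLP/xlmr-qa-extraction-en",
    aggregation_strategy="none",
)

tokens = qa_tagger("Do you think that looks like a cat? Answer: I don't think so.")

# Each item has 'word', 'entity' and 'score'; group consecutive question/answer
# tags into spans yourself, or reuse the linked post-processing script.
for t in tokens:
    print(t["word"], t["entity"], round(t["score"], 3))
```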
## Citing
To cite this model use the following bibtex.
```
@inproceedings{eskelinen-etal-2024-building-question,
title = "Building Question-Answer Data Using Web Register Identification",
author = "Eskelinen, Anni and
Myntti, Amanda and
Henriksson, Erik and
Pyysalo, Sampo and
Laippala, Veronika",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.234",
pages = "2595--2611",
abstract = "This article introduces a resource-efficient method for developing question-answer (QA) datasets by extracting QA pairs from web-scale data using machine learning (ML). Our method benefits from recent advances in web register (genre) identification and consists of two ML steps with an additional post-processing step. First, using XLM-R and the multilingual CORE web register corpus series with categories such as QA Forum, we train a multilingual classifier to retrieve documents that are likely to contain QA pairs from web-scale data. Second, we develop a NER-style token classifier to identify the QA text spans within these documents. To this end, we experiment with training on a semi-synthetic dataset built on top of the English LFQA, a small set of manually cleaned web QA pairs in English and Finnish, and a Finnish web QA pair dataset cleaned using ChatGPT. The evaluation of our pipeline demonstrates its capability to efficiently retrieve a substantial volume of QA pairs. While the approach is adaptable to any language given the availability of language models and extensive web data, we showcase its efficiency in English and Finnish, developing the first open, non-synthetic and non-machine translated QA dataset for Finnish {--} Turku WebQA {--} comprising over 200,000 QA pairs.",
}
``` |
TurkuNLP/xlmr-qa-extraction-fi | TurkuNLP | 2024-05-28T07:34:37Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-11-02T09:38:23Z | ---
license: cc-by-nc-sa-4.0
library_name: transformers
pipeline_tag: token-classification
widget:
- text: "Kysymys: Onko tuo kissa? Vastaus: En osaa sanoa."
---
### xlm-roberta-base for token classification, specifically fine-tuned for question-answer extraction for Finnish
This is `xlm-roberta-base` fine-tuned on manually annotated Finnish data, ChatGPT-annotated data and a semi-synthetic dataset based on the LFQA dataset.
### Hyperparameters
```
batch_size = 8
epochs = 10 (trained for less)
base_LM_model = "xlm-roberta-base"
max_seq_len = 512
learning_rate = 1e-5
```
### Performance
```
Accuracy = 0.85
Question F1 = 0.82
Answer F1 = 0.75
```
### Usage
To get the best question-answer pairs, use the Hugging Face pipeline with no aggregation strategy and do some post-processing, as in this [script](https://github.com/TurkuNLP/register-qa/blob/main/token-classification/scripts/extract_qa_fi_no_entropy.py).
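The call mirrors the English model; a minimal sketch (span grouping and post-processing are still up to you):

```python
from transformers import pipeline

qa_tagger = pipeline(
    "token-classification",
    model="TurkuNLP/xlmr-qa-extraction-fi",
    aggregation_strategy="none",
)
print(qa_tagger("Kysymys: Onko tuo kissa? Vastaus: En osaa sanoa."))
```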
### Citing
To cite this model use the following bibtex.
```
@inproceedings{eskelinen-etal-2024-building-question,
title = "Building Question-Answer Data Using Web Register Identification",
author = "Eskelinen, Anni and
Myntti, Amanda and
Henriksson, Erik and
Pyysalo, Sampo and
Laippala, Veronika",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.234",
pages = "2595--2611",
abstract = "This article introduces a resource-efficient method for developing question-answer (QA) datasets by extracting QA pairs from web-scale data using machine learning (ML). Our method benefits from recent advances in web register (genre) identification and consists of two ML steps with an additional post-processing step. First, using XLM-R and the multilingual CORE web register corpus series with categories such as QA Forum, we train a multilingual classifier to retrieve documents that are likely to contain QA pairs from web-scale data. Second, we develop a NER-style token classifier to identify the QA text spans within these documents. To this end, we experiment with training on a semi-synthetic dataset built on top of the English LFQA, a small set of manually cleaned web QA pairs in English and Finnish, and a Finnish web QA pair dataset cleaned using ChatGPT. The evaluation of our pipeline demonstrates its capability to efficiently retrieve a substantial volume of QA pairs. While the approach is adaptable to any language given the availability of language models and extensive web data, we showcase its efficiency in English and Finnish, developing the first open, non-synthetic and non-machine translated QA dataset for Finnish {--} Turku WebQA {--} comprising over 200,000 QA pairs.",
}
``` |
furkanbicer/q-FrozenLake-v1-4x4-noSlippery | furkanbicer | 2024-05-28T07:33:57Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-28T07:33:55Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # `import gym` also works with older setups

# `load_from_hub` is the download helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="furkanbicer/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mesolitica/llava-v1.6-34b-hf-awq | mesolitica | 2024-05-28T07:32:19Z | 96 | 0 | transformers | [
"transformers",
"safetensors",
"llava_next",
"image-text-to-text",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | image-text-to-text | 2024-05-28T07:09:37Z | ---
library_name: transformers
tags: []
---
# Llava-1.6 34B AWQ
You need to use this fork: https://github.com/WanBenLe/AutoAWQ-with-llava-v1.6 |
ferrazzipietro/Llama-2-7b-chat-hf_adapters_en.layer1_NoQuant_torch.bfloat16_16_32_0.01_1_0.0002 | ferrazzipietro | 2024-05-28T07:28:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-27T17:12:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
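In the meantime, if these weights are PEFT/LoRA adapters for `meta-llama/Llama-2-7b-chat-hf` (an assumption based only on the repository name), loading would look roughly like this:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model, inferred from the repo name
adapter_id = "ferrazzipietro/Llama-2-7b-chat-hf_adapters_en.layer1_NoQuant_torch.bfloat16_16_32_0.01_1_0.0002"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights to the base model
```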
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tanvi03/finetunever3-raredata | Tanvi03 | 2024-05-28T07:28:30Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T03:17:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
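In the meantime, a minimal sketch using the generic text-generation pipeline (the expected prompt/chat format is not documented, so plain text is used here as an assumption):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Tanvi03/finetunever3-raredata")
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```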
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
haturusinghe/LLAMA3-Finetune-v1-1.41_loss-May-28-2024 | haturusinghe | 2024-05-28T07:25:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T07:25:14Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** haturusinghe
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DiederikMartens/gBERT_sa_cv_12_fold8 | DiederikMartens | 2024-05-28T07:22:46Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T07:10:09Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_12_fold8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_12_fold8
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5238
- F1: 0.6375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4182 | 0.5045 |
| No log | 2.0 | 452 | 0.5894 | 0.6292 |
| 0.3404 | 3.0 | 678 | 0.5238 | 0.6375 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DaichiT/box | DaichiT | 2024-05-28T07:18:05Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-28T07:12:58Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2
inference: true
instance_prompt: a photo of sks box
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - DaichiT/box
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks box using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
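Until the snippet above is filled in, a minimal sketch assuming the standard diffusers text-to-image API and the instance prompt used for training:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("DaichiT/box", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks box", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_box.png")
```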
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
imrgurmeet/qwen1.5-llm-quantized | imrgurmeet | 2024-05-28T07:15:14Z | 5 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-27T17:25:36Z | The "qwen1.5-llm-quantized" model is a quantized version of the original Qwen1.5-110B model. Qwen1.5 is a transformer-based decoder-only language model that has been pretrained on a large amount of data. The improvements in Qwen1.5 include multiple model sizes, ranging from 0.5B to 110B dense models, as well as an MoE (Mixture of Experts) model of 14B with 2.7B activated. These models show significant performance improvements in chat models and provide multilingual support for both base and chat models. They also offer stable support for a 32K context length for models of all sizes. The quantized version of the model has undergone a quantization process, which reduces the model size and computational requirements while maintaining its performance.
For more details about the original Qwen1.5-110B model, you can refer to the blog post and GitHub repository provided by the Qwen team at Alibaba Cloud.
"https://huggingface.co/Qwen/Qwen1.5-110B" "https://github.com/QwenLM/Qwen1.5" |
sunoaiysha/fine-tuned-gpt2 | sunoaiysha | 2024-05-28T07:13:15Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T19:14:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
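In the meantime, a minimal sketch assuming the checkpoint works as a standard GPT-2 text-generation model:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sunoaiysha/fine-tuned-gpt2")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```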
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adhityaprimandhika/mistral_categorization_unsloth_q4_v2_gguf | adhityaprimandhika | 2024-05-28T07:10:15Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T07:06:39Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** adhityaprimandhika
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DiederikMartens/gBERT_sa_cv_12_fold7 | DiederikMartens | 2024-05-28T07:10:05Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T06:57:27Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_12_fold7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_12_fold7
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4881
- F1: 0.7384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4008 | 0.5145 |
| No log | 2.0 | 452 | 0.4047 | 0.6607 |
| 0.3287 | 3.0 | 678 | 0.4881 | 0.7384 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
aalexzhang/Flair-It-RoBERTa-usc | aalexzhang | 2024-05-28T07:09:22Z | 181 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T07:09:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
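In the meantime, a minimal sketch assuming the checkpoint works with the standard text-classification pipeline (the label set is not documented):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="aalexzhang/Flair-It-RoBERTa-usc")
print(classifier("This is an example sentence."))  # returns a list of {'label': ..., 'score': ...}
```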
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DiederikMartens/mBERT_sa_cv_12_fold6 | DiederikMartens | 2024-05-28T07:06:48Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T06:53:13Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_12_fold6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_12_fold6
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5131
- F1: 0.5977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4875 | 0.4515 |
| No log | 2.0 | 452 | 0.3963 | 0.5102 |
| 0.4398 | 3.0 | 678 | 0.5131 | 0.5977 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
subhavarshith/donut-demo_exp3_NO_earlystop_exp4_1280 | subhavarshith | 2024-05-28T07:04:46Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-28T05:09:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
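In the meantime, a minimal sketch assuming the usual Donut-style setup (DonutProcessor plus VisionEncoderDecoderModel); the task start token is model-specific and is only a placeholder here:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "subhavarshith/donut-demo_exp3_NO_earlystop_exp4_1280"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # replace with your input image
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s>"  # placeholder: the actual task start token used in training is not documented
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```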
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CK0607/ko-ok-test | CK0607 | 2024-05-28T07:02:54Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T07:01:09Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** CK0607
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WDKT/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B | WDKT | 2024-05-28T07:01:27Z | 3,810 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zh",
"en",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-21T05:14:21Z | ---
license: llama3
language:
- zh
- en
pipeline_tag: text-generation
---
<div align="center">
<picture>
<img src="https://github.com/xiangxinai/XiangxinLM/blob/main/assets/logo.png?raw=true" width="150px">
</picture>
</div>
<div align="center">
<h1>
Xiangxin-2XL-Chat-1048k
</h1>
</div>
我们提供私有化模型训练服务,如果您需要训练行业模型、领域模型或者私有模型,请联系我们: [email protected]
We offer customized model training services. If you need to train industry-specific models, domain-specific models, or private models, please contact us at: [email protected].
# <span id="Introduction">模型介绍/Introduction</span>
Xiangxin-2XL-Chat-1048k是[象信AI](https://www.xiangxinai.cn)基于Meta Llama-3-70B-Instruct模型和[Gradient AI的扩充上下文的工作](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-1048k),利用自行研发的中文价值观对齐数据集进行ORPO训练而形成的Chat模型。该模型具备更强的中文能力和中文价值观,其上下文长度达到100万字。在模型性能方面,该模型在ARC、HellaSwag、MMLU、TruthfulQA_mc2、Winogrande、GSM8K_flex、CMMLU、CEVAL-VALID等八项测评中,取得了平均分70.22分的成绩,超过了Gradientai-Llama-3-70B-Instruct-Gradient-1048k。我们的训练数据并不包含任何测评数据集。
Xiangxin-2XL-Chat-1048k is a Chat model developed by [Xiangxin AI](https://www.xiangxinai.cn), based on the Meta Llama-3-70B-Instruct model and [expanded context from Gradient AI](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-1048k). It was trained using a proprietary Chinese value-aligned dataset through ORPO training, resulting in enhanced Chinese proficiency and alignment with Chinese values. The model has a context length of up to 1 million words. In terms of performance, it surpassed the Gradientai-Llama-3-70B-Instruct-Gradient-1048k model with an average score of 70.22 across eight evaluations including ARC, HellaSwag, MMLU, TruthfulQA_mc2, Winogrande, GSM8K_flex, CMMLU, and C-EVAL. It's worth noting that our training data did not include any evaluation datasets.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Xiangxin-2XL-Chat-1048k | 1048k | 15T |
</div>
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
| | **Average** | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Winogrande** | **GSM8K** | **CMMLU** | **CEVAL** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|:-------:|:-------:|:-------:|
|**Xiangxin-2XL-Chat-1048k**| 70.22 | 60.92 | 83.29 |75.13| 57.33| 76.64| 81.05| 65.40| 62.03 |
|**Llama-3-70B-Instruct-Gradient-1048k**| 69.66| 61.18 |82.88 |74.95 |55.28 |75.77 |77.79 |66.44 |63.00|
Note: TruthfulQA is reported as truthfulqa_mc2, and GSM8K uses the flexible-extract score.
# <span id="Training">训练过程模型/Training</span>
该模型是使用ORPO技术和自行研发的中文价值观对齐数据集进行训练的。由于内容的敏感性,该数据集无法公开披露。
The model was trained using ORPO and a proprietary Chinese alignment dataset developed in-house. Due to the sensitivity of the content, the dataset cannot be publicly disclosed.
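The exact training script, hyperparameters, and alignment data are not released; purely as an illustration of the ORPO setup described above, a minimal sketch with TRL's `ORPOTrainer` might look like the following (the dataset path and every value below are placeholder assumptions, not the settings used for this model).
```python
# Illustrative ORPO sketch only: the actual training script, data, and hyperparameters
# for this model are not public. Assumes a preference dataset with "prompt",
# "chosen" and "rejected" columns; all values below are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "gradientai/Llama-3-70B-Instruct-Gradient-1048k"  # long-context base named above
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
# (A 70B base needs multi-GPU sharding or offloading in practice.)

# Hypothetical placeholder file; the real value-alignment data is proprietary.
train_ds = load_dataset("json", data_files="alignment_preferences.jsonl", split="train")

args = ORPOConfig(
    output_dir="xiangxin-orpo",
    beta=0.1,                          # ORPO odds-ratio weight (assumed)
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    max_length=4096,
    max_prompt_length=2048,
)

trainer = ORPOTrainer(model=model, args=args, train_dataset=train_ds, tokenizer=tokenizer)
trainer.train()
```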
## Training loss

## Reward accuracies

## SFT loss

# <span id="Start">快速开始/Quick Start</span>
## Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
使用Transformers运行本模型推理需要约400GB的显存。
Running inference with this model using Transformers requires approximately 400GB of GPU memory.
### Transformers pipeline
```python
import transformers
import torch
model_id = "xiangxinai/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": ""},
{"role": "user", "content": "解释一下“温故而知新”"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
“温故而知新”是中国古代的一句成语,出自《论语·子路篇》。
它的意思是通过温习过去的知识和经验,来获得新的理解和见解。
这里的“温故”是指温习过去,回顾历史,复习旧知识,
而“知新”则是指了解新鲜事物,掌握新知识。
这个成语强调学习的循序渐进性,强调在学习新知识时,
不能忽视过去的基础,而是要在继承和发扬的基础上,去理解和创新。
```
### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "xiangxinai/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": ""},
{"role": "user", "content": "解释一下“温故而知新”"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
“温故而知新”是中国古代的一句成语,出自《论语·子路篇》。
它的意思是通过温习过去的知识和经验,来获得新的理解和见解。
这里的“温故”是指温习过去,回顾历史,复习旧知识,
而“知新”则是指了解新鲜事物,掌握新知识。
这个成语强调学习的循序渐进性,强调在学习新知识时,
不能忽视过去的基础,而是要在继承和发扬的基础上,去理解和创新。
```
# 协议/License
This code is licensed under the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.
# 联系我们/Contact Us
For inquiries, please contact us via email at [email protected]. |
claudios/CodeBERTa-small-v1 | claudios | 2024-05-28T07:00:03Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"code",
"dataset:code_search_net",
"arxiv:1909.09436",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-28T06:54:33Z | ---
language: code
thumbnail: https://cdn-media.huggingface.co/CodeBERTa/CodeBERTa.png
datasets:
- code_search_net
---
This is an *unofficial* reupload of [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) in the `SafeTensors` format using `transformers` `4.41.1`. The goal of this reupload is to prevent older models that are still relevant baselines from becoming stale as a result of changes in HuggingFace. Additionally, I may include minor corrections, such as model max length configuration.
Original model card below:
---
# CodeBERTa
CodeBERTa is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub.
Supported languages:
```shell
"go"
"java"
"javascript"
"php"
"python"
"ruby"
```
The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`.
Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% and 50% shorter, compared to the same corpus tokenized by gpt2/roberta).
The (small) **model** is a 6-layer, 84M parameters, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full corpus (~2M functions) for 5 epochs.
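As a rough, hedged way to check the claim about shorter sequences, one can compare token counts from this tokenizer and from `roberta-base` on a small code snippet (the exact ratio depends on the code being tokenized):
```python
# Rough check of the efficiency claim: token counts from CodeBERTa's tokenizer vs.
# roberta-base on the same snippet. The exact ratio depends on the code used.
from transformers import AutoTokenizer

code = """
def add(a, b):
    return a + b
"""

codeberta_tok = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
roberta_tok = AutoTokenizer.from_pretrained("roberta-base")

print(len(codeberta_tok.tokenize(code)), "tokens with CodeBERTa")
print(len(roberta_tok.tokenize(code)), "tokens with roberta-base")
```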
### Tensorboard for this training ⤵️
[](https://tensorboard.dev/experiment/irRI7jXGQlqmlxXS0I07ew/#scalars)
## Quick start: masked language modeling prediction
```python
PHP_CODE = """
public static <mask> set(string $key, $value) {
if (!in_array($key, self::$allowedKeys)) {
throw new \InvalidArgumentException('Invalid key given');
}
self::$storedValues[$key] = $value;
}
""".lstrip()
```
### Does the model know how to complete simple PHP code?
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="huggingface/CodeBERTa-small-v1",
tokenizer="huggingface/CodeBERTa-small-v1"
)
fill_mask(PHP_CODE)
## Top 5 predictions:
#
' function' # prob 0.9999827146530151
'function' #
' void' #
' def' #
' final' #
```
### Yes! That was easy 🎉 What about some Python (warning: this is going to be meta)
```python
PYTHON_CODE = """
def pipeline(
task: str,
model: Optional = None,
framework: Optional[<mask>] = None,
**kwargs
) -> Pipeline:
pass
""".lstrip()
```
Results:
```python
'framework', 'Framework', ' framework', 'None', 'str'
```
> This program can auto-complete itself! 😱
### Just for fun, let's try to mask natural language (not code):
```python
fill_mask("My name is <mask>.")
# {'sequence': '<s> My name is undefined.</s>', 'score': 0.2548016905784607, 'token': 3353}
# {'sequence': '<s> My name is required.</s>', 'score': 0.07290805131196976, 'token': 2371}
# {'sequence': '<s> My name is null.</s>', 'score': 0.06323737651109695, 'token': 469}
# {'sequence': '<s> My name is name.</s>', 'score': 0.021919190883636475, 'token': 652}
# {'sequence': '<s> My name is disabled.</s>', 'score': 0.019681859761476517, 'token': 7434}
```
This (kind of) works because code contains comments (which contain natural language).
Of course, the most frequent name for a Computer scientist must be undefined 🤓.
## Downstream task: [programming language identification](https://huggingface.co/huggingface/CodeBERTa-language-id)
See the model card for **[`huggingface/CodeBERTa-language-id`](https://huggingface.co/huggingface/CodeBERTa-language-id)** 🤯.
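A hedged usage sketch for that companion checkpoint, using the standard text-classification pipeline (the model id is taken from the link above; the example input is arbitrary):
```python
# Hedged sketch for the companion checkpoint linked above (model id taken from that link).
from transformers import pipeline

lang_id = pipeline(
    "text-classification",
    model="huggingface/CodeBERTa-language-id",
    tokenizer="huggingface/CodeBERTa-language-id",
)

print(lang_id("def f(x): return x ** 2"))  # expected to rank 'python' highest
```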
<br>
## CodeSearchNet citation
<details>
```bibtex
@article{husain_codesearchnet_2019,
title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}},
shorttitle = {{CodeSearchNet} {Challenge}},
url = {http://arxiv.org/abs/1909.09436},
urldate = {2020-03-12},
journal = {arXiv:1909.09436 [cs, stat]},
author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
month = sep,
year = {2019},
note = {arXiv: 1909.09436},
}
```
</details> |
RedaAlami/t5_recommendation_sports_equipment_english2 | RedaAlami | 2024-05-28T06:59:02Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-28T06:32:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5_recommendation_sports_equipment_english2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_recommendation_sports_equipment_english2
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5359
- Rouge1: 74.1270
- Rouge2: 66.6667
- Rougel: 74.1270
- Rougelsum: 73.8095
- Gen Len: 4.0476
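The card does not document an inference recipe. As a hedged sketch, under the assumption that the model maps a short English request about sports equipment to a recommended item (suggested by the model name and the short generation lengths above), inference could look like this; the prompt format is a guess:
```python
# Hedged inference sketch; the prompt format is a guess, not documented in this card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "RedaAlami/t5_recommendation_sports_equipment_english2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("I want to start playing tennis. What should I buy?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```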
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 1 | 9.9716 | 12.4868 | 0.0 | 12.5845 | 12.5051 | 19.0 |
| No log | 2.0 | 2 | 10.1466 | 9.9134 | 0.0 | 9.9471 | 9.8413 | 19.0 |
| No log | 3.0 | 3 | 8.3378 | 10.5739 | 0.0 | 10.6349 | 10.5291 | 19.0 |
| No log | 4.0 | 4 | 7.3021 | 10.5739 | 0.0 | 10.6349 | 10.5291 | 19.0 |
| No log | 5.0 | 5 | 6.3242 | 10.4605 | 0.0 | 10.5471 | 10.4567 | 19.0 |
| No log | 6.0 | 6 | 5.4331 | 10.2886 | 0.7937 | 10.2319 | 10.3793 | 19.0 |
| No log | 7.0 | 7 | 4.7152 | 10.8989 | 0.7937 | 10.8388 | 10.9525 | 18.9524 |
| No log | 8.0 | 8 | 3.9937 | 13.9421 | 3.7009 | 14.0590 | 13.9456 | 15.0952 |
| No log | 9.0 | 9 | 3.1163 | 16.0431 | 1.0025 | 15.7736 | 15.9707 | 6.4762 |
| No log | 10.0 | 10 | 2.3306 | 23.1746 | 7.1429 | 22.8571 | 23.6508 | 4.1429 |
| No log | 11.0 | 11 | 1.9695 | 21.2698 | 7.1429 | 20.9524 | 21.4286 | 4.0476 |
| No log | 12.0 | 12 | 1.5552 | 23.8095 | 7.1429 | 23.3333 | 23.8095 | 3.9048 |
| No log | 13.0 | 13 | 0.8986 | 9.0476 | 0.0 | 9.0476 | 9.0476 | 3.7619 |
| No log | 14.0 | 14 | 0.7398 | 17.4603 | 2.3810 | 18.2540 | 17.4603 | 4.1905 |
| No log | 15.0 | 15 | 0.6966 | 12.6984 | 0.0 | 12.6984 | 12.6984 | 3.6667 |
| No log | 16.0 | 16 | 0.6352 | 32.5397 | 14.2857 | 32.5397 | 32.5397 | 3.7619 |
| No log | 17.0 | 17 | 0.5722 | 43.6508 | 23.8095 | 43.6508 | 42.8571 | 4.0952 |
| No log | 18.0 | 18 | 0.5628 | 43.6508 | 23.8095 | 43.6508 | 42.8571 | 3.8571 |
| No log | 19.0 | 19 | 0.5526 | 43.1746 | 23.8095 | 43.1746 | 42.8571 | 3.8571 |
| No log | 20.0 | 20 | 0.5522 | 48.4127 | 38.0952 | 48.4127 | 48.4127 | 3.7619 |
| No log | 21.0 | 21 | 0.5201 | 42.8571 | 28.5714 | 42.8571 | 42.3810 | 4.2381 |
| No log | 22.0 | 22 | 0.5262 | 37.1429 | 19.0476 | 36.9841 | 36.9841 | 4.2857 |
| No log | 23.0 | 23 | 0.5093 | 37.6190 | 23.8095 | 37.6190 | 37.6190 | 4.1429 |
| No log | 24.0 | 24 | 0.4818 | 45.3175 | 33.3333 | 45.2381 | 45.2381 | 4.1429 |
| No log | 25.0 | 25 | 0.4547 | 50.7937 | 38.0952 | 50.7937 | 50.7937 | 4.1429 |
| No log | 26.0 | 26 | 0.4455 | 50.7937 | 38.0952 | 50.7937 | 50.7937 | 4.1429 |
| No log | 27.0 | 27 | 0.4660 | 53.1746 | 42.8571 | 53.1746 | 53.1746 | 4.0476 |
| No log | 28.0 | 28 | 0.4825 | 53.1746 | 42.8571 | 53.1746 | 53.1746 | 4.0 |
| No log | 29.0 | 29 | 0.4928 | 53.1746 | 42.8571 | 53.1746 | 53.1746 | 4.0476 |
| No log | 30.0 | 30 | 0.4838 | 57.7778 | 42.8571 | 57.2222 | 57.5397 | 4.0476 |
| No log | 31.0 | 31 | 0.4955 | 60.3175 | 47.6190 | 60.3175 | 60.3175 | 4.0476 |
| No log | 32.0 | 32 | 0.5066 | 62.6984 | 52.3810 | 62.6984 | 62.6984 | 4.1429 |
| No log | 33.0 | 33 | 0.5189 | 62.6984 | 52.3810 | 62.6984 | 62.6984 | 4.1905 |
| No log | 34.0 | 34 | 0.5234 | 62.6984 | 52.3810 | 62.6984 | 62.6984 | 4.1905 |
| No log | 35.0 | 35 | 0.5225 | 62.6984 | 52.3810 | 62.6984 | 62.6984 | 4.1905 |
| No log | 36.0 | 36 | 0.5225 | 62.6984 | 52.3810 | 62.6984 | 62.6984 | 4.1905 |
| No log | 37.0 | 37 | 0.5058 | 62.8571 | 52.3810 | 62.2222 | 62.6984 | 4.1429 |
| No log | 38.0 | 38 | 0.4861 | 69.8413 | 61.9048 | 69.8413 | 69.8413 | 4.1905 |
| No log | 39.0 | 39 | 0.4625 | 69.8413 | 61.9048 | 69.8413 | 69.8413 | 4.1905 |
| No log | 40.0 | 40 | 0.4438 | 72.2222 | 66.6667 | 72.2222 | 72.2222 | 4.0952 |
| No log | 41.0 | 41 | 0.4231 | 72.2222 | 66.6667 | 72.2222 | 72.2222 | 4.0952 |
| No log | 42.0 | 42 | 0.4073 | 72.2222 | 66.6667 | 72.2222 | 72.2222 | 4.0952 |
| No log | 43.0 | 43 | 0.3938 | 72.2222 | 66.6667 | 72.2222 | 72.2222 | 4.0952 |
| No log | 44.0 | 44 | 0.3912 | 72.2222 | 66.6667 | 72.2222 | 72.2222 | 4.0952 |
| No log | 45.0 | 45 | 0.3980 | 72.2222 | 66.6667 | 72.2222 | 72.2222 | 4.1429 |
| No log | 46.0 | 46 | 0.4062 | 72.2222 | 66.6667 | 72.2222 | 72.2222 | 4.1905 |
| No log | 47.0 | 47 | 0.4121 | 76.9841 | 71.4286 | 76.9841 | 76.9841 | 4.2857 |
| No log | 48.0 | 48 | 0.4150 | 76.9841 | 71.4286 | 76.9841 | 76.9841 | 4.1905 |
| No log | 49.0 | 49 | 0.4183 | 76.9841 | 71.4286 | 76.9841 | 76.9841 | 4.1429 |
| No log | 50.0 | 50 | 0.4205 | 76.9841 | 71.4286 | 76.9841 | 76.9841 | 4.1905 |
| No log | 51.0 | 51 | 0.4306 | 79.3651 | 76.1905 | 79.3651 | 79.3651 | 4.0952 |
| No log | 52.0 | 52 | 0.4411 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 4.0 |
| No log | 53.0 | 53 | 0.4526 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 4.0476 |
| No log | 54.0 | 54 | 0.4667 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 4.0 |
| No log | 55.0 | 55 | 0.4871 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 4.0 |
| No log | 56.0 | 56 | 0.5063 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 4.0 |
| No log | 57.0 | 57 | 0.5196 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 4.0 |
| No log | 58.0 | 58 | 0.5265 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 3.9524 |
| No log | 59.0 | 59 | 0.5308 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 3.9524 |
| No log | 60.0 | 60 | 0.5333 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 3.9524 |
| No log | 61.0 | 61 | 0.5344 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 3.9524 |
| No log | 62.0 | 62 | 0.5348 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 3.9524 |
| No log | 63.0 | 63 | 0.5354 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 3.9524 |
| No log | 64.0 | 64 | 0.5359 | 76.5079 | 71.4286 | 76.5079 | 76.1905 | 3.9524 |
| No log | 65.0 | 65 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 66.0 | 66 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 67.0 | 67 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 68.0 | 68 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 69.0 | 69 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 70.0 | 70 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 71.0 | 71 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 72.0 | 72 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 73.0 | 73 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 74.0 | 74 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 75.0 | 75 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 76.0 | 76 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 77.0 | 77 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 78.0 | 78 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 79.0 | 79 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
| No log | 80.0 | 80 | 0.5359 | 74.1270 | 66.6667 | 74.1270 | 73.8095 | 4.0476 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.3.0+cu121
- Datasets 2.8.0
- Tokenizers 0.13.3
|
DiederikMartens/mBERT_sa_cv_12_fold5 | DiederikMartens | 2024-05-28T06:53:07Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T06:39:39Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_12_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_12_fold5
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4607
- F1: 0.5275
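Usage is not documented in this card; as a hedged sketch, the checkpoint can be loaded with the standard text-classification pipeline (the example sentence and the meaning of the predicted labels are assumptions):
```python
# Hedged usage sketch; the label mapping and expected input language are not documented
# here, so the example sentence is only an assumption.
from transformers import pipeline

classifier = pipeline("text-classification", model="DiederikMartens/mBERT_sa_cv_12_fold5")
print(classifier("Das Produkt ist wirklich großartig!"))
```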
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.6423 | 0.2958 |
| No log | 2.0 | 452 | 0.5093 | 0.5167 |
| 0.5972 | 3.0 | 678 | 0.4607 | 0.5275 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
John6666/cherry-picker-xl-v3-sdxl | John6666 | 2024-05-28T06:53:05Z | 93 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-28T06:47:16Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
---
Original model is [here](https://civitai.com/models/125680?modelVersionId=373927).
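No usage instructions are given here; a hedged sketch with the standard `diffusers` SDXL pipeline follows (prompt and settings are illustrative only):
```python
# Hedged usage sketch with the standard diffusers SDXL pipeline; prompt and settings
# are illustrative only.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/cherry-picker-xl-v3-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "1girl, cherry blossoms, detailed illustration",
    num_inference_steps=28,
).images[0]
image.save("sample.png")
```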
|
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-822545 | fine-tuned | 2024-05-28T06:50:08Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Debate",
"Argument",
"Opposition",
"Rebuttal",
"Discussion",
"en",
"dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-822545",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T06:49:37Z | ---
license: apache-2.0
datasets:
- fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-822545
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Debate
- Argument
- Opposition
- Rebuttal
- Discussion
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
counter arguments in a debate
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-822545',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
yasyf/bert-for-patents | yasyf | 2024-05-28T06:47:21Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"feature-extraction",
"masked-lm",
"en",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T06:28:52Z | ---
language:
- en
tags:
- masked-lm
- pytorch
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "The present [MASK] provides a torque sensor that is small and highly rigid and for which high production efficiency is possible."
- text: "The present invention relates to [MASK] accessories and pertains particularly to a brake light unit for bicycles."
- text: "The present invention discloses a space-bound-free [MASK] and its coordinate determining circuit for determining a coordinate of a stylus pen."
- text: "The illuminated [MASK] includes a substantially translucent canopy supported by a plurality of ribs pivotally swingable towards and away from a shaft."
license: apache-2.0
metrics:
- perplexity
---
# Motivation
This model is based on anferico/bert-for-patents - a BERT<sub>LARGE</sub> model (see the next section for details). By default, the pre-trained model outputs embeddings of size 768 (base models) or 1024 (large models). However, storing millions of embeddings can require quite a lot of memory/storage. The embedding dimension has therefore been reduced to 64, i.e. 1/16th of 1024, using Principal Component Analysis (PCA), and it still gives comparable performance. PCA also gave better results here than NMF. Note: this process improves neither the runtime nor the memory requirement for running the model. It only reduces the space needed to store embeddings, for example for semantic search using vector databases.
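As a hedged illustration of the approach (not the exact script used for this checkpoint), the reduction can be reproduced with scikit-learn's PCA on a sample of 1024-dimensional embeddings:
```python
# Hedged sketch of the dimensionality reduction described above; not the exact script
# used for this checkpoint. Assumes a saved [N, 1024] array of embeddings.
import numpy as np
from sklearn.decomposition import PCA

embeddings = np.load("patent_embeddings_1024d.npy")  # placeholder file name

pca = PCA(n_components=64)
reduced = pca.fit_transform(embeddings)  # shape [N, 64], roughly 1/16th of the storage
print(reduced.shape, pca.explained_variance_ratio_.sum())
```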
# BERT for Patents
BERT for Patents is a model trained by Google on 100M+ patents (not just US patents).
If you want to learn more about the model, check out the [blog post](https://cloud.google.com/blog/products/ai-machine-learning/how-ai-improves-patent-analysis), [white paper](https://services.google.com/fh/files/blogs/bert_for_patents_white_paper.pdf) and [GitHub page](https://github.com/google/patents-public-data/blob/master/models/BERT%20for%20Patents.md) containing the original TensorFlow checkpoint.
---
### Projects using this model (or variants of it):
- [Patents4IPPC](https://github.com/ec-jrc/Patents4IPPC) (carried out by [Pi School](https://picampus-school.com/) and commissioned by the [Joint Research Centre (JRC)](https://ec.europa.eu/jrc/en) of the European Commission)
|
sahlebrahim/bert-finetuned-squad | sahlebrahim | 2024-05-28T06:46:31Z | 43 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-13T09:25:22Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
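Usage is not documented in this card; assuming extractive question answering (suggested by the repository name and the base model), a minimal hedged sketch would be:
```python
# Hedged usage sketch assuming extractive question answering; question and context
# are arbitrary examples.
from transformers import pipeline

qa = pipeline("question-answering", model="sahlebrahim/bert-finetuned-squad")
print(qa(
    question="What does the model predict?",
    context="The model predicts the answer span inside a given context passage.",
))
```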
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/gBERT_sa_cv_12_fold5 | DiederikMartens | 2024-05-28T06:44:52Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T06:32:20Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_12_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_12_fold5
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4564
- F1: 0.6400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.4027 | 0.5657 |
| No log | 2.0 | 452 | 0.4462 | 0.5591 |
| 0.3464 | 3.0 | 678 | 0.4564 | 0.6400 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-69882 | fine-tuned | 2024-05-28T06:43:13Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Argument",
"Debate",
"Opposition",
"Persuasion",
"Refutation",
"custom_code",
"en",
"dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-69882",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T06:42:58Z | ---
license: apache-2.0
datasets:
- fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-69882
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Argument
- Debate
- Opposition
- Persuasion
- Refutation
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
counter argument retrieval system
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-69882',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
aariz120/tiny-chatbot-dpo | aariz120 | 2024-05-28T06:43:11Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T06:34:43Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tiny-chatbot-dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-chatbot-dpo
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
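Usage is not documented here; since this repository stores a PEFT adapter for the TinyLlama chat base model, a hedged loading sketch looks like this (generation settings are illustrative only):
```python
# Hedged loading sketch: the adapter is attached on top of the TinyLlama chat base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "aariz120/tiny-chatbot-dpo")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello! What can you do?"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```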
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
DiederikMartens/eBERT_sa_cv_12_fold4 | DiederikMartens | 2024-05-28T06:41:00Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T06:26:44Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_12_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_12_fold4
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5774
- F1: 0.4941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5470 | 0.4529 |
| No log | 2.0 | 452 | 0.4903 | 0.4753 |
| 0.5054 | 3.0 | 678 | 0.5774 | 0.4941 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
adhityaprimandhika/mistral_categorization_unsloth_lora_adapter_v2 | adhityaprimandhika | 2024-05-28T06:40:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T01:43:13Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** adhityaprimandhika
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pduy395/custom-roberta | pduy395 | 2024-05-28T06:36:48Z | 163 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-28T06:32:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adlbh/llama-2-7b-medinstruct-52k | adlbh | 2024-05-28T06:35:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"base_model:finetune:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T06:33:32Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** adlbh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
state-spaces/mamba2-2.7b | state-spaces | 2024-05-28T06:34:15Z | 2,676 | 14 | transformers | [
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T06:23:28Z | ---
license: apache-2.0
---
|
scoliono/groupchat_lora_llama3_8b | scoliono | 2024-05-28T06:33:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T06:33:01Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** scoliono
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DiederikMartens/gBERT_sa_cv_12_fold4 | DiederikMartens | 2024-05-28T06:32:16Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T06:19:43Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_12_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_12_fold4
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5119
- F1: 0.6835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3834 | 0.5321 |
| No log | 2.0 | 452 | 0.4565 | 0.6399 |
| 0.3375 | 3.0 | 678 | 0.5119 | 0.6835 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
CMU-AIR2/math-phi-1-5-FULL-Arithmetic-Steps-lr-1-5e-6-6k | CMU-AIR2 | 2024-05-28T06:31:36Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T06:29:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Amadkour/wav2vec2-large-xls-r-300m-tr-softkour | Amadkour | 2024-05-28T06:28:33Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:Amadkour/wav2vec2-large-xls-r-300m-tr-softkour",
"base_model:finetune:Amadkour/wav2vec2-large-xls-r-300m-tr-softkour",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-30T21:00:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: Amadkour/wav2vec2-large-xls-r-300m-tr-softkour
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-tr-softkour
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: ar
split: test
args: ar
metrics:
- type: wer
value: 0.44904159531569354
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr-softkour
This model is a fine-tuned version of [Amadkour/wav2vec2-large-xls-r-300m-tr-softkour](https://huggingface.co/Amadkour/wav2vec2-large-xls-r-300m-tr-softkour) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4793
- Wer: 0.4490
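Usage is not documented in this card; a hedged sketch with the standard automatic-speech-recognition pipeline (the audio path is a placeholder and 16 kHz input is assumed):
```python
# Hedged usage sketch; the audio path is a placeholder and 16 kHz input is assumed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Amadkour/wav2vec2-large-xls-r-300m-tr-softkour",
)
print(asr("sample_16khz.wav")["text"])
```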
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4662 | 0.33 | 400 | 0.7627 | 0.6241 |
| 0.3927 | 0.67 | 800 | 0.7286 | 0.6213 |
| 0.4613 | 1.0 | 1200 | 0.5779 | 0.5185 |
| 0.4552 | 1.33 | 1600 | 0.5412 | 0.4945 |
| 0.4145 | 1.66 | 2000 | 0.4922 | 0.4652 |
| 0.3713 | 2.0 | 2400 | 0.4793 | 0.4490 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
state-spaces/mamba2-1.3b | state-spaces | 2024-05-28T06:27:37Z | 17,958 | 3 | transformers | [
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T06:23:10Z | ---
license: apache-2.0
---
|
DiederikMartens/eBERT_sa_cv_12_fold3 | DiederikMartens | 2024-05-28T06:26:39Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T06:12:30Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_12_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_12_fold3
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5914
- F1: 0.4973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.6089 | 0.3445 |
| No log | 2.0 | 452 | 0.4911 | 0.4798 |
| 0.5244 | 3.0 | 678 | 0.5914 | 0.4973 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
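A minimal inference sketch for this classifier (the repo id is taken from the card header; the example sentence is a placeholder, and the label names depend on the unreleased training data):

```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned text classifier on a sample sentence
classifier = pipeline("text-classification", model="DiederikMartens/eBERT_sa_cv_12_fold3")
print(classifier("The service was surprisingly good."))
```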
|
state-spaces/mamba2-780m | state-spaces | 2024-05-28T06:26:12Z | 2,931 | 1 | transformers | [
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T06:19:43Z | ---
license: apache-2.0
---
|
DiederikMartens/mBERT_sa_cv_12_fold3 | DiederikMartens | 2024-05-28T06:25:44Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T06:12:05Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_12_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_12_fold3
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5104
- F1: 0.5693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5091 | 0.4490 |
| No log | 2.0 | 452 | 0.4197 | 0.5448 |
| 0.4564 | 3.0 | 678 | 0.5104 | 0.5693 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/tsBERT_sa_cv_12_fold3 | DiederikMartens | 2024-05-28T06:25:32Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T06:11:55Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_12_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_12_fold3
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4248
- F1: 0.6486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3531 | 0.5399 |
| No log | 2.0 | 452 | 0.3575 | 0.6417 |
| 0.3511 | 3.0 | 678 | 0.4248 | 0.6486 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ainiyo002/model | ainiyo002 | 2024-05-28T06:16:59Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-27T07:23:20Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** ainiyo002
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
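A minimal inference sketch for the uploaded weights (plain transformers loading is assumed here; the prompt and generation settings are placeholders, and 4-bit loading via Unsloth or bitsandbytes is optional and not described by the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the uploaded Mistral fine-tune and generate a short completion
tokenizer = AutoTokenizer.from_pretrained("ainiyo002/model")
model = AutoModelForCausalLM.from_pretrained("ainiyo002/model")
inputs = tokenizer("Write a short haiku about the sea.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```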
|
kkeezz/cap-iaa-lora | kkeezz | 2024-05-28T06:16:51Z | 2 | 0 | peft | [
"peft",
"mplug_owl2",
"region:us"
] | null | 2024-05-28T06:09:48Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
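A minimal sketch of attaching this adapter to a base model with PEFT (the base checkpoint id below is a placeholder; the card does not name which mPLUG-Owl2 model the LoRA was trained against):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Minimal sketch: load the (unnamed) base model, then attach this LoRA adapter.
# "<base-mplug-owl2-repo>" is a placeholder, not a real repo id.
base = AutoModelForCausalLM.from_pretrained("<base-mplug-owl2-repo>", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "kkeezz/cap-iaa-lora")
```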
|
jinq047/distilbert-base-uncased-finetuned-imdb | jinq047 | 2024-05-28T06:16:51Z | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-28T05:53:18Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6819 | 1.0 | 157 | 2.4978 |
| 2.5872 | 2.0 | 314 | 2.4488 |
| 2.527 | 3.0 | 471 | 2.4823 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
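A minimal inference sketch for this masked-language model (the repo id is taken from the card header; the example sentence is a placeholder):

```python
from transformers import pipeline

# Minimal sketch: predict the masked token with the fine-tuned checkpoint
fill = pipeline("fill-mask", model="jinq047/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```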
|
nerdthingz/moon_landing | nerdthingz | 2024-05-28T06:15:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-28T06:03:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.50 +/- 23.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file actually stored in this repo):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3
# (the filename is an assumption, not taken from the card)
checkpoint = load_from_hub(repo_id="nerdthingz/moon_landing", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
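To sanity-check the downloaded policy, a short evaluation could look like this (the environment id and episode count are assumptions, not part of the original card):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Roll out the loaded policy for a quick evaluation
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```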
|
DiederikMartens/eBERT_sa_cv_12_fold2 | DiederikMartens | 2024-05-28T06:12:26Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T05:58:41Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_12_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_12_fold2
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5381
- F1: 0.5716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.5964 | 0.4234 |
| No log | 2.0 | 452 | 0.5521 | 0.4536 |
| 0.4957 | 3.0 | 678 | 0.5381 | 0.5716 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
0xfaskety/Qwen-Qwen1.5-1.8B-1716875866 | 0xfaskety | 2024-05-28T06:10:53Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T05:57:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
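In the absence of author-provided code, a minimal sketch (assuming, from the repo tags, that this is a Qwen2-architecture causal LM loadable with standard transformers classes; the prompt is a placeholder):

```python
from transformers import pipeline

# Minimal sketch: text generation with the uploaded checkpoint
generator = pipeline("text-generation", model="0xfaskety/Qwen-Qwen1.5-1.8B-1716875866")
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```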
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DiederikMartens/gBERT_sa_cv_12_fold2 | DiederikMartens | 2024-05-28T06:06:50Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T05:54:34Z | ---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_12_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_12_fold2
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4152
- F1: 0.6707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 226 | 0.3664 | 0.5390 |
| No log | 2.0 | 452 | 0.4152 | 0.6707 |
| 0.3358 | 3.0 | 678 | 0.5516 | 0.6571 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|