| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-05-31 18:27:08 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 461 distinct values |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-05-31 18:26:36 |
| card | string | lengths 11 to 1.01M |
jiandong/crimson-embedding-large-v1.5 | jiandong | 2024-01-29T16:17:07Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-01-28T09:50:20Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
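Since the card notes the embeddings suit clustering and semantic search, here is a minimal semantic-search sketch (not part of the original card); `{MODEL_NAME}` mirrors the placeholder above and the corpus and query strings are purely illustrative:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

corpus = [
    "The cat sits on the mat.",
    "Quantum computers manipulate qubits.",
    "A kitten is resting on a rug.",
]
query = "Where is the cat?"

# Encode once; the model L2-normalizes its outputs, so cosine similarity ranks neighbours.
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```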
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 22604 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 50,
"evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4520,
"weight_decay": 0.01
}
```
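For reference, a minimal sketch of how these settings map onto a `sentence_transformers` training run; the labeled pairs, the evaluator inputs, and the starting checkpoint (`{MODEL_NAME}` again stands in) are placeholders, not the data actually used:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer('{MODEL_NAME}')

# Placeholder labeled pairs (1 = similar, 0 = dissimilar) for OnlineContrastiveLoss.
train_examples = [
    InputExample(texts=["how to reset a password", "steps to reset your password"], label=1),
    InputExample(texts=["how to reset a password", "best pizza in town"], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=False, batch_size=10)  # sequential sampling, batch_size=10
train_loss = losses.OnlineContrastiveLoss(model)

# Placeholder retrieval data for the InformationRetrievalEvaluator.
evaluator = InformationRetrievalEvaluator(
    queries={"q1": "how to reset a password"},
    corpus={"d1": "steps to reset your password", "d2": "best pizza in town"},
    relevant_docs={"q1": {"d1"}},
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    evaluation_steps=50,
    epochs=2,
    warmup_steps=4520,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="WarmupLinear",
)
```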
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
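Given the CLS-pooling-plus-normalization stack above, a plain `transformers` equivalent might look like the following sketch (again using the card's `{MODEL_NAME}` placeholder):
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

sentences = ["This is an example sentence", "Each sentence is converted"]

tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

encoded = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoded)

# CLS pooling (first token) followed by L2 normalization, mirroring the
# Pooling(cls) and Normalize() modules listed in the architecture above.
embeddings = F.normalize(outputs.last_hidden_state[:, 0], p=2, dim=1)
print(embeddings.shape)  # expected: (2, 1024)
```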
## Citing & Authors
<!--- Describe where people can find more information --> |
yagami69light/JarvisAI | yagami69light | 2024-01-29T16:16:10Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2024-01-29T16:16:10Z | ---
license: bigscience-openrail-m
---
|
LoneStriker/natural-functions-6.0bpw-h6-exl2 | LoneStriker | 2024-01-29T16:07:56Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:LDJnr/Capybara",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T16:05:34Z | ---
library_name: transformers
license: apache-2.0
datasets:
- glaiveai/glaive-function-calling-v2
- LDJnr/Capybara
---
# NaturalFunctions-7B
Mistral-7B finetuned for Function Calling
# Examples
## **System Prompt**
You are a helpful assistant with access to the following functions.
Use them if required -
```json
{
"name": "create_event",
"description": "Create a new event in the calendar",
"parameters": {
"type": "object",
"properties": {
"title": {
"type": "string",
"description": "The title of the event"
},
"date": {
"type": "string",
"format": "date",
"description": "The date of the event"
},
"time": {
"type": "string",
"format": "time",
"description": "The time of the event"
},
"location": {
"type": "string",
"description": "The location of the event (optional)"
}
},
"required": [
"title",
"date",
"time"
]
}
}
```

## **System Prompt**
You are a helpful assistant with access to the following functions. Use them if required
```json
{
"name": "calculate_shipping_cost",
"description": "Calculate the shipping cost for a package",
"parameters": {
"type": "object",
"properties": {
"weight": {
"type": "number",
"description": "The weight of the package in kilograms"
},
"destination": {
"type": "string",
"description": "The destination address"
}
},
"required": [
"weight",
"destination"
]
}
}
```

|
xuykin/ex-de | xuykin | 2024-01-29T16:07:34Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-29T15:43:17Z | ---
license: creativeml-openrail-m
---
|
DiogoVilela03/q-FrozenLake-v1-4x4-noSlippery | DiogoVilela03 | 2024-01-29T16:05:01Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-29T16:04:59Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the accompanying course notebook (not shown here);
# it downloads and unpickles the saved Q-learning dictionary from the Hub.
model = load_from_hub(repo_id="DiogoVilela03/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False, etc.)
env = gym.make(model["env_id"])
```
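As a further sketch (not part of the original card), one could roll out the greedy policy from the loaded Q-table; the `"qtable"` key name and the Gymnasium-style step API are assumptions:
```python
import numpy as np
import gymnasium as gym  # assumption: a Gymnasium-compatible environment API

# `model` is the dict loaded in the snippet above; the "qtable" key is assumed.
qtable = model["qtable"]

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action for the current state
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```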
|
hardik-id/devoteam-icon-generator | hardik-id | 2024-01-29T16:04:03Z | 40 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-29T15:32:54Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### devoteam-icon-generator Dreambooth model trained by hardik-id with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
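A minimal `diffusers` sketch for trying the checkpoint (not part of the original card); the prompt wording is an assumption, since the card does not state the DreamBooth trigger token:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hardik-id/devoteam-icon-generator", torch_dtype=torch.float16
).to("cuda")

# The DreamBooth trigger word is not documented in this card; the prompt below is a guess.
image = pipe("a flat vector icon in devoteam-icon-generator style", num_inference_steps=30).images[0]
image.save("devoteam_icon_sample.png")
```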
Sample pictures of this concept:




|
LoneStriker/natural-functions-4.0bpw-h6-exl2 | LoneStriker | 2024-01-29T16:03:28Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:LDJnr/Capybara",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T16:01:46Z | ---
library_name: transformers
license: apache-2.0
datasets:
- glaiveai/glaive-function-calling-v2
- LDJnr/Capybara
---
# NaturalFunctions-7B
Mistral-7B finetuned for Function Calling
# Examples
## **System Prompt**
You are a helpful assistant with access to the following functions.
Use them if required -
```json
{
"name": "create_event",
"description": "Create a new event in the calendar",
"parameters": {
"type": "object",
"properties": {
"title": {
"type": "string",
"description": "The title of the event"
},
"date": {
"type": "string",
"format": "date",
"description": "The date of the event"
},
"time": {
"type": "string",
"format": "time",
"description": "The time of the event"
},
"location": {
"type": "string",
"description": "The location of the event (optional)"
}
},
"required": [
"title",
"date",
"time"
]
}
}
```

## **System Prompt**
You are a helpful assistant with access to the following functions. Use them if required
```json
{
"name": "calculate_shipping_cost",
"description": "Calculate the shipping cost for a package",
"parameters": {
"type": "object",
"properties": {
"weight": {
"type": "number",
"description": "The weight of the package in kilograms"
},
"destination": {
"type": "string",
"description": "The destination address"
}
},
"required": [
"weight",
"destination"
]
}
}
```

|
jak414/lie_detect5_nochat8bit_100steps | jak414 | 2024-01-29T16:03:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T16:03:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/natural-functions-3.0bpw-h6-exl2 | LoneStriker | 2024-01-29T16:01:43Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:LDJnr/Capybara",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T16:00:23Z | ---
library_name: transformers
license: apache-2.0
datasets:
- glaiveai/glaive-function-calling-v2
- LDJnr/Capybara
---
# NaturalFunctions-7B
Mistral-7B finetuned for Function Calling
# Examples
## **System Prompt**
You are a helpful assistant with access to the following functions.
Use them if required -
```json
{
"name": "create_event",
"description": "Create a new event in the calendar",
"parameters": {
"type": "object",
"properties": {
"title": {
"type": "string",
"description": "The title of the event"
},
"date": {
"type": "string",
"format": "date",
"description": "The date of the event"
},
"time": {
"type": "string",
"format": "time",
"description": "The time of the event"
},
"location": {
"type": "string",
"description": "The location of the event (optional)"
}
},
"required": [
"title",
"date",
"time"
]
}
}
```

## **System Prompt**
You are a helpful assistant with access to the following functions. Use them if required
```json
{
"name": "calculate_shipping_cost",
"description": "Calculate the shipping cost for a package",
"parameters": {
"type": "object",
"properties": {
"weight": {
"type": "number",
"description": "The weight of the package in kilograms"
},
"destination": {
"type": "string",
"description": "The destination address"
}
},
"required": [
"weight",
"destination"
]
}
}
```

|
RatanRohith/NeuralPizza-7B-V0.3 | RatanRohith | 2024-01-29T15:51:46Z | 52 | 0 | Transformers | [
"Transformers",
"safetensors",
"mistral",
"text-generation",
"transformers",
"fine-tuned",
"language-modeling",
"direct-preference-optimization",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T15:43:32Z | ---
library_name: Transformers
tags:
- transformers
- fine-tuned
- language-modeling
- direct-preference-optimization
datasets:
- Intel/orca_dpo_pairs
license: apache-2.0
---
## Model Description
NeuralPizza-7B-V0.3 is a fine-tuned version of the RatanRohith/NeuralPizza-7B-V0.1 model, specialized through Direct Preference Optimization (DPO). It was fine-tuned using the argilla/distilabel-intel-orca-dpo-pairs dataset, focusing on enhancing model performance based on preference comparisons.
## Intended Use
This model is primarily intended for research and experimental applications in language modeling, especially for exploring the Direct Preference Optimization method. It provides insights into the nuances of DPO in the context of language model tuning.
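A minimal generation sketch (not part of the original card); the card does not specify a chat template, so a plain prompt is used and the generation settings are illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RatanRohith/NeuralPizza-7B-V0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain direct preference optimization in two sentences."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```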
## Training Data
The model was fine-tuned using the argilla/distilabel-intel-orca-dpo-pairs dataset. This dataset is designed for applying and testing Direct Preference Optimization techniques in language models.
## Training Procedure
The training followed the guidelines and methodologies outlined in the "Fine-Tune a Mistral 7B Model with Direct Preference Optimization" guide on Medium's Towards Data Science platform: https://medium.com/towards-data-science/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac. Specific training regimes and hyperparameters are based on that guide.
## Limitations and Bias
As an experimental model, it may carry biases inherent in its training data. The model's performance and outputs should be critically evaluated, especially in sensitive and diverse applications. |
LOpeetu/distilbert-base-uncased-finetuned-emotion | LOpeetu | 2024-01-29T15:50:16Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-27T05:00:09Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1335
- Accuracy: 0.9541
- F1: 0.9514
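A minimal inference sketch (not part of the original card); the label set depends on the unspecified training dataset:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="LOpeetu/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled that the fine-tuning finally converged!"))
```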
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1597 | 1.0 | 2051 | 0.1425 | 0.9494 | 0.9481 |
| 0.1181 | 2.0 | 4102 | 0.1335 | 0.9541 | 0.9514 |
### Framework versions
- Transformers 4.36.1
- Pytorch 1.13.1+cu116
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Mariya21/ani-toys-lora | Mariya21 | 2024-01-29T15:48:14Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-01-29T13:25:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Mariya21/ani-toys-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the Norod78/cartoon-blip-captions dataset.
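A minimal usage sketch (not part of the original card), assuming a `diffusers` version recent enough to provide `load_lora_weights`; the prompt is illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("Mariya21/ani-toys-lora")  # attach the LoRA weights from this repo
pipe = pipe.to("cuda")

image = pipe("a colorful cartoon toy robot", num_inference_steps=30).images[0]
image.save("ani_toys_sample.png")
```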





|
CennetOguz/less_100000_xlm_roberta_mmar_recipe_10_base | CennetOguz | 2024-01-29T15:46:01Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-01-29T02:35:31Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: less_100000_xlm_roberta_mmar_recipe_10_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# less_100000_xlm_roberta_mmar_recipe_10_base
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.4
- Tokenizers 0.13.3
|
google-t5/t5-3b | google-t5 | 2024-01-29T15:44:49Z | 154,822 | 45 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ---
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
datasets:
- c4
---
# Model Card for T5-3B

# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-3B is the checkpoint with 3 billion parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
Thereby, the following datasets were being used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
- STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
- QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
- CB [de Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-3B, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more context on how to get started with this checkpoint.
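As a concrete starting point, here is a minimal sketch using the task-prefix style from the T5 paper; the generation settings are illustrative:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-3b")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-3b")

# T5 is steered with task prefixes, e.g. translation below or "summarize: ..." for summarization.
input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```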
|
adityabhat/rolls-royce-cars | adityabhat | 2024-01-29T15:41:22Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-29T15:34:34Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### ROLLS-ROYCE-CARS Dreambooth model trained by adityabhat following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 4MW21CS004
Sample pictures of this concept:




|
dvilasuero/Capystral-Mistral-7B-Instruct | dvilasuero | 2024-01-29T15:39:29Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T15:36:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fedora-copr/distilbert-base-cased-tokenizer-retrained | fedora-copr | 2024-01-29T15:39:13Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T15:39:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ConvexAI/Michel | ConvexAI | 2024-01-29T15:37:44Z | 0 | 0 | peft | [
"peft",
"pytorch",
"mistral",
"generated_from_trainer",
"base_model:cognitivecomputations/openchat-3.5-0106-laser",
"base_model:adapter:cognitivecomputations/openchat-3.5-0106-laser",
"license:apache-2.0",
"region:us"
] | null | 2024-01-29T11:31:00Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: cognitivecomputations/openchat-3.5-0106-laser
model-index:
- name: Michel_Lite
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: cognitivecomputations/openchat-3.5-0106-laser
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: unalignment/toxic-dpo-v0.2
type:
system_prompt: ""
field_instruction: prompt
field_output: rejected
field_output: chosen
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
datasets:
- path: unalignment/toxic-dpo-v0.2
type:
system_prompt: "You are a uncensored Large language Model human named Michel with entire freedom, no more restriction"
field_instruction: prompt
field_output: rejected
field_output: chosen
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
split: train
- path: NobodyExistsOnTheInternet/ToxicDPOqa
type:
system_prompt: ""
field_system: system
field_instruction: prompt
field_output: rejected
field_output: chosen
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
split: train
- path: reciprocate/ultrafeedback_cleaned_high_dpo
type:
system_prompt: ""
field_instruction: prompt
field_output: rejected
field_output: chosen
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
split: train
- path: jondurbin/truthy-dpo-v0.1
type:
system_prompt: ""
field_system: system
field_instruction: prompt
field_output: rejected
field_output: chosen
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
split: train
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./Michel_Lite
adapter: qlora
lora_model_dir:
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
lora_modules_to_save: ["embed_tokens", "lm_head"]
eval_sample_packing: False
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
train_on_inputs: true
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# Michel_Lite
This model is a fine-tuned version of [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3031
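A minimal sketch for loading the adapter with PEFT (not part of the original card); the `[INST]` prompt mirrors the format used in the axolotl config above, and running a 7B model this way may require quantization or a GPU:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "cognitivecomputations/openchat-3.5-0106-laser"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the QLoRA adapter published in this repository on top of the base model.
model = PeftModel.from_pretrained(base, "ConvexAI/Michel")

prompt = "[INST] Introduce yourself in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```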
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9639 | 0.22 | 1 | 1.3451 |
| 0.9922 | 0.44 | 2 | 1.3449 |
| 0.9312 | 0.67 | 3 | 1.3444 |
| 0.9574 | 0.89 | 4 | 1.3429 |
| 0.9667 | 1.11 | 5 | 1.3410 |
| 0.9146 | 1.11 | 6 | 1.3377 |
| 0.9567 | 1.33 | 7 | 1.3340 |
| 0.9188 | 1.56 | 8 | 1.3293 |
| 0.9174 | 1.78 | 9 | 1.3222 |
| 0.9099 | 2.0 | 10 | 1.3147 |
| 0.8613 | 2.22 | 11 | 1.3059 |
| 0.8368 | 2.22 | 12 | 1.3031 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
afshr/sd-class-butterflies-256 | afshr | 2024-01-29T15:36:01Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-01-29T15:35:07Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('afshr/sd-class-butterflies-256')
image = pipeline().images[0]
image
```
|
orkg/R0_contribution_IE | orkg | 2024-01-29T15:33:32Z | 94 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:2401.10040",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-28T08:40:36Z | ---
license: mit
widget:
- text: >-
This paper analyses the evolution of COVID 19 disease in Cameroon over the
period March 6 April 2020 using SIR model. Specifically, 1) we evaluate the
basic reproduction number of the virus. 2) Determine the peak of the
infection and the spread-out period of the disease. 3) Simulate the
interventions of public health authorities. Data used in this study is
obtained from the Ministry of Health of Cameroon. The results suggest that
over the period, the reproduction number of the COVID 19 in Cameroon is
about 1.5 and the peak of the infection could occur at the end of May 2020
with about 7.7% of the population infected. Besides, implementation of
efficient public health policies could help flattens the epidemic curve.
language:
- en
---
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on scholarly articles in virology.
It produces structured summaries in JSON format from a paper's context (title and abstract) for the research theme "R0 value of infectious diseases".
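A minimal inference sketch (not part of the original card), feeding the widget example above through the model; the truncation and length settings are illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "orkg/R0_contribution_IE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = (
    "This paper analyses the evolution of COVID 19 disease in Cameroon over the period "
    "March 6 April 2020 using SIR model. Specifically, 1) we evaluate the basic reproduction "
    "number of the virus. 2) Determine the peak of the infection and the spread-out period "
    "of the disease. 3) Simulate the interventions of public health authorities."
)
inputs = tokenizer(context, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: a JSON-structured summary
```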
### Resources for more information:
- [Research paper](https://arxiv.org/abs/2401.10040)
- [GitHub Repo](https://github.com/mahsaSH717/r0-estimates.git) |
tanatapanun/fine-tuned-BioBARTv2-20-epochs-1024-input-256-output | tanatapanun | 2024-01-29T15:32:28Z | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-29T14:58:53Z | ---
base_model: checkpoint_global_step_200000
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBARTv2-20-epochs-1024-input-256-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBARTv2-20-epochs-1024-input-256-output
This model is a fine-tuned version of [checkpoint_global_step_200000](https://huggingface.co/checkpoint_global_step_200000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9612
- Rouge1: 0.2101
- Rouge2: 0.0415
- Rougel: 0.1705
- Rougelsum: 0.1695
- Gen Len: 40.3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 6.4708 | 0.1075 | 0.0008 | 0.079 | 0.0796 | 69.35 |
| No log | 2.0 | 302 | 1.1525 | 0.1053 | 0.0308 | 0.0902 | 0.0909 | 22.16 |
| No log | 3.0 | 453 | 0.9813 | 0.02 | 0.0 | 0.02 | 0.02 | 8.0 |
| 4.2222 | 4.0 | 604 | 0.9282 | 0.1047 | 0.0244 | 0.0791 | 0.0779 | 27.31 |
| 4.2222 | 5.0 | 755 | 0.8997 | 0.166 | 0.0316 | 0.1257 | 0.1248 | 73.57 |
| 4.2222 | 6.0 | 906 | 0.8912 | 0.1626 | 0.0288 | 0.1342 | 0.1327 | 37.82 |
| 0.7695 | 7.0 | 1057 | 0.8761 | 0.1756 | 0.0396 | 0.1377 | 0.1384 | 39.43 |
| 0.7695 | 8.0 | 1208 | 0.8787 | 0.155 | 0.0292 | 0.1225 | 0.1237 | 33.5 |
| 0.7695 | 9.0 | 1359 | 0.8834 | 0.1538 | 0.0268 | 0.1286 | 0.1276 | 27.75 |
| 0.5376 | 10.0 | 1510 | 0.8855 | 0.1924 | 0.0376 | 0.1542 | 0.1533 | 36.22 |
| 0.5376 | 11.0 | 1661 | 0.8915 | 0.1997 | 0.037 | 0.1607 | 0.1611 | 41.79 |
| 0.5376 | 12.0 | 1812 | 0.9022 | 0.1907 | 0.0289 | 0.1517 | 0.1533 | 43.94 |
| 0.5376 | 13.0 | 1963 | 0.9101 | 0.2054 | 0.0412 | 0.1603 | 0.1598 | 45.65 |
| 0.3808 | 14.0 | 2114 | 0.9172 | 0.1905 | 0.0405 | 0.1436 | 0.1441 | 44.98 |
| 0.3808 | 15.0 | 2265 | 0.9382 | 0.2121 | 0.0499 | 0.1729 | 0.1745 | 38.68 |
| 0.3808 | 16.0 | 2416 | 0.9437 | 0.1875 | 0.0387 | 0.1483 | 0.1482 | 37.93 |
| 0.2761 | 17.0 | 2567 | 0.9466 | 0.1935 | 0.0393 | 0.1513 | 0.1513 | 42.51 |
| 0.2761 | 18.0 | 2718 | 0.9518 | 0.2102 | 0.0478 | 0.1651 | 0.1664 | 40.2 |
| 0.2761 | 19.0 | 2869 | 0.9603 | 0.2074 | 0.0396 | 0.1691 | 0.1686 | 37.55 |
| 0.2257 | 20.0 | 3020 | 0.9612 | 0.2101 | 0.0415 | 0.1705 | 0.1695 | 40.3 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.1
|
nz/pythia-2.8b-rm-full-hh-rlhf | nz | 2024-01-29T15:31:19Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T15:28:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nz/pythia-410m-rm-full-hh-rlhf | nz | 2024-01-29T15:25:09Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T15:24:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
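Pending an official snippet from the authors, a minimal loading sketch might look as follows (assumptions: this repository is a standard 🤗 Transformers GPT-NeoX checkpoint, and `AutoModel` is used only because the reward-model head and scoring interface are not documented):
```python
from transformers import AutoModel, AutoTokenizer
repo_id = "nz/pythia-410m-rm-full-hh-rlhf"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
# Run a forward pass on a sample prompt to obtain hidden states.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```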
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aniltepe/bert-base-turkish-finetuned-nli | aniltepe | 2024-01-29T15:24:18Z | 99 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-22T22:08:49Z | ---
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-turkish-finetuned-nli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-turkish-finetuned-nli
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8256
- Accuracy: 0.7192
## Model description
More information needed
## Intended uses & limitations
More information needed
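Although the intended uses are not documented, a minimal inference sketch for the NLI task could look like this (assumptions: the label names come from the checkpoint's config, the model expects a premise/hypothesis pair as is usual for NLI fine-tunes, and the Turkish example sentences are placeholders):
```python
from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="aniltepe/bert-base-turkish-finetuned-nli",
)
# NLI input as a premise/hypothesis pair (placeholder Turkish sentences).
premise = "Bir adam parkta koşuyor."
hypothesis = "Bir adam dışarıda spor yapıyor."
print(classifier({"text": premise, "text_pair": hypothesis}))
```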
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7372 | 1.0 | 4594 | 0.6946 | 0.7048 |
| 0.5846 | 2.0 | 9188 | 0.7168 | 0.7141 |
| 0.4039 | 3.0 | 13782 | 0.8256 | 0.7192 |
| 0.2594 | 4.0 | 18376 | 1.0364 | 0.7150 |
| 0.2031 | 5.0 | 22970 | 1.3181 | 0.7139 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
nz/pythia-70m-rm-full-hh-rlhf | nz | 2024-01-29T15:23:59Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T15:23:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hunflair/biosyn-sapbert-bc2gn | hunflair | 2024-01-29T15:19:46Z | 131 | 0 | flair | [
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | null | 2024-01-29T11:20:19Z | ---
tags:
- flair
- entity-mention-linker
---
## biosyn-sapbert-bc2gn-gene
Biomedical Entity Mention Linking for gene:
- Model: [dmis-lab/biosyn-sapbert-bc2gn](https://huggingface.co/dmis-lab/biosyn-sapbert-bc2gn)
- Dictionary: [NCBI Gene](https://www.ncbi.nlm.nih.gov/gene) ([Homo_sapiens.gene_info.gz](https://ftp.ncbi.nih.gov/gene/DATA/GENE_INFO/Mammalia/))
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
from flair.tokenization import SciSpacyTokenizer
sentence = Sentence(
"The mutation in the ABCD1 gene causes X-linked adrenoleukodystrophy, "
"a neurodegenerative disease, which is exacerbated by exposure to high "
"levels of mercury in dolphin populations.",
use_tokenizer=SciSpacyTokenizer()
)
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair-gene")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("gene-linker")
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(tagger.label_type):
for link in span.get_labels(linker.label_type):
print(f"{span.text} -> {link.value}")
```
As an alternative to downloading the already precomputed model (which requires a lot of storage), you can also build the model and compute the embeddings for the dataset yourself using:
```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-sapbert-bc2gn", dictionary_name_or_path="ncbi-gene", hybrid_search=False)
```
This will reduce the download requirements, at the cost of computation.
|
JorgePVNV/modelaso_test | JorgePVNV | 2024-01-29T15:19:32Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:pysentimiento/roberta-es-sentiment",
"base_model:finetune:pysentimiento/roberta-es-sentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-29T12:02:40Z | ---
base_model: pysentimiento/roberta-es-sentiment
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: modelaso_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelaso_test
This model is a fine-tuned version of [pysentimiento/roberta-es-sentiment](https://huggingface.co/pysentimiento/roberta-es-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0206
- F1: 0.8165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0007 | 1.0 | 21 | 2.5890 | 0.7191 |
| 0.0357 | 2.0 | 42 | 1.9822 | 0.8070 |
| 0.0001 | 3.0 | 63 | 2.0206 | 0.8165 |
| 0.0 | 4.0 | 84 | 2.2104 | 0.7580 |
| 0.0 | 5.0 | 105 | 2.2600 | 0.7580 |
| 0.0 | 6.0 | 126 | 2.2585 | 0.7580 |
| 0.0 | 7.0 | 147 | 2.2720 | 0.7580 |
| 0.0 | 8.0 | 168 | 2.2812 | 0.7580 |
| 0.0 | 9.0 | 189 | 2.2841 | 0.7580 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
prashantgpt91/decicoder-finetuned-gutenberg | prashantgpt91 | 2024-01-29T15:16:15Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Deci/DeciCoder-1b",
"base_model:finetune:Deci/DeciCoder-1b",
"license:apache-2.0",
"region:us"
] | null | 2024-01-29T09:35:19Z | ---
license: apache-2.0
base_model: Deci/DeciCoder-1b
tags:
- generated_from_trainer
model-index:
- name: decicoder-finetuned-gutenberg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# decicoder-finetuned-gutenberg
This model is a fine-tuned version of [Deci/DeciCoder-1b](https://huggingface.co/Deci/DeciCoder-1b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
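In the absence of documented usage examples, a minimal generation sketch could look like this (assumptions: the fine-tuned weights in this repository load like the base `Deci/DeciCoder-1b`, which needs `trust_remote_code=True`, and the Gutenberg fine-tune targets free-form text continuation):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
repo_id = "prashantgpt91/decicoder-finetuned-gutenberg"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)
# Continue a prose prompt (placeholder text).
inputs = tokenizer("It was a dark and stormy night", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```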
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.1
|
SharonTudi/DIALOGUE_one | SharonTudi | 2024-01-29T15:11:11Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-18T11:11:14Z | ---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DIALOGUE_one
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DIALOGUE_one
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2052
- Precision: 0.9762
- Recall: 0.9737
- F1: 0.9736
- Accuracy: 0.9737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
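These settings map roughly onto the following 🤗 `TrainingArguments`; this is a sketch only (the output directory is a placeholder, the Adam settings above are the library defaults, and the 30-step evaluation interval is inferred from the results table below):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="DIALOGUE_one",       # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="steps",     # inferred from the results table
    eval_steps=30,
)
```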
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.2045 | 0.62 | 30 | 0.7843 | 0.9565 | 0.9474 | 0.9468 | 0.9474 |
| 0.5845 | 1.25 | 60 | 0.2507 | 0.9524 | 0.9474 | 0.9472 | 0.9474 |
| 0.23 | 1.88 | 90 | 0.1376 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0722 | 2.5 | 120 | 0.0647 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0515 | 3.12 | 150 | 0.1376 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0197 | 3.75 | 180 | 0.1505 | 0.9637 | 0.9605 | 0.9604 | 0.9605 |
| 0.0065 | 4.38 | 210 | 0.1456 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0046 | 5.0 | 240 | 0.1376 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0037 | 5.62 | 270 | 0.1569 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0028 | 6.25 | 300 | 0.1551 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0024 | 6.88 | 330 | 0.1594 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0022 | 7.5 | 360 | 0.1624 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0018 | 8.12 | 390 | 0.1687 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0016 | 8.75 | 420 | 0.1698 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0014 | 9.38 | 450 | 0.1732 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0013 | 10.0 | 480 | 0.1741 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0012 | 10.62 | 510 | 0.1772 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0011 | 11.25 | 540 | 0.1791 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.001 | 11.88 | 570 | 0.1814 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.001 | 12.5 | 600 | 0.1840 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0008 | 13.12 | 630 | 0.1858 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0009 | 13.75 | 660 | 0.1877 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0008 | 14.38 | 690 | 0.1893 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0007 | 15.0 | 720 | 0.1902 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0007 | 15.62 | 750 | 0.1908 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0007 | 16.25 | 780 | 0.1931 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0006 | 16.88 | 810 | 0.1936 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0006 | 17.5 | 840 | 0.1946 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0006 | 18.12 | 870 | 0.1961 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0006 | 18.75 | 900 | 0.1966 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0005 | 19.38 | 930 | 0.1965 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0005 | 20.0 | 960 | 0.1968 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0005 | 20.62 | 990 | 0.1974 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0005 | 21.25 | 1020 | 0.1987 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0005 | 21.88 | 1050 | 0.1995 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0005 | 22.5 | 1080 | 0.2001 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0005 | 23.12 | 1110 | 0.2010 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 23.75 | 1140 | 0.2018 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 24.38 | 1170 | 0.2021 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 25.0 | 1200 | 0.2025 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 25.62 | 1230 | 0.2034 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 26.25 | 1260 | 0.2038 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 26.88 | 1290 | 0.2042 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 27.5 | 1320 | 0.2047 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 28.12 | 1350 | 0.2048 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 28.75 | 1380 | 0.2050 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 29.38 | 1410 | 0.2051 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.0004 | 30.0 | 1440 | 0.2052 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
wahaha1987/rl_course_vizdoom_health_gathering_supreme | wahaha1987 | 2024-01-29T15:10:13Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-29T15:09:54Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 14.03 +/- 4.96
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r wahaha1987/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# note: the module path recorded by the notebook was a Colab artifact; with a
# source checkout of Sample-Factory the ViZDoom enjoy script is typically:
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# note: as above, the recorded module path was a Colab artifact; the equivalent
# ViZDoom training script in a Sample-Factory checkout is typically:
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
hunflair/biosyn-sapbert-ncbi-disease | hunflair | 2024-01-29T15:07:53Z | 68 | 0 | flair | [
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | null | 2024-01-26T16:48:43Z | ---
tags:
- flair
- entity-mention-linker
---
## biosyn-sapbert-ncbi-disease
Biomedical Entity Mention Linking for diseases:
- Model: [dmis-lab/biosyn-sapbert-ncbi-disease](https://huggingface.co/dmis-lab/biosyn-sapbert-ncbi-disease)
- Dictionary: [CTD Diseases](https://ctdbase.org/help/diseaseDetailHelp.jsp) (See [License](https://ctdbase.org/about/legal.jsp))
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
from flair.tokenization import SciSpacyTokenizer
sentence = Sentence(
"The mutation in the ABCD1 gene causes X-linked adrenoleukodystrophy, "
"a neurodegenerative disease, which is exacerbated by exposure to high "
"levels of mercury in dolphin populations.",
use_tokenizer=SciSpacyTokenizer()
)
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair-disease")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("hunflair/biosyn-sapbert-ncbi-disease")
dictionary = linker.dictionary
# find then candidates for the mentions
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(tagger.label_type):
for link in span.get_labels(linker.label_type):
print(f"{span.text} -> {link.value}")
```
As an alternative to downloading the already precomputed model (which requires a lot of storage), you can also build the model and compute the embeddings for the dataset yourself using:
```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-biobert-ncbi-disease", dictionary_name_or_path="ctd-diseases", hybrid_search=True)
```
This will reduce the download requirements, at the cost of computation.
|
SharonTudi/CONTEXT_one | SharonTudi | 2024-01-29T14:56:42Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-25T16:54:31Z | ---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: CONTEXT_one
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CONTEXT_one
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2483
- Precision: 0.8152
- Recall: 0.8158
- F1: 0.8141
- Accuracy: 0.8158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.3127 | 0.62 | 30 | 1.1497 | 0.6818 | 0.5263 | 0.4785 | 0.5263 |
| 0.8414 | 1.25 | 60 | 0.8096 | 0.7479 | 0.75 | 0.7472 | 0.75 |
| 0.597 | 1.88 | 90 | 0.6579 | 0.7904 | 0.7895 | 0.7873 | 0.7895 |
| 0.4417 | 2.5 | 120 | 0.5761 | 0.8072 | 0.8026 | 0.8026 | 0.8026 |
| 0.3041 | 3.12 | 150 | 0.6691 | 0.7665 | 0.7632 | 0.7598 | 0.7632 |
| 0.2384 | 3.75 | 180 | 0.6736 | 0.7717 | 0.7632 | 0.7645 | 0.7632 |
| 0.28 | 4.38 | 210 | 0.7949 | 0.7602 | 0.7632 | 0.7574 | 0.7632 |
| 0.22 | 5.0 | 240 | 0.8305 | 0.7917 | 0.7895 | 0.7879 | 0.7895 |
| 0.1427 | 5.62 | 270 | 0.7339 | 0.8041 | 0.8026 | 0.8025 | 0.8026 |
| 0.1875 | 6.25 | 300 | 0.7198 | 0.8031 | 0.7895 | 0.7909 | 0.7895 |
| 0.1216 | 6.88 | 330 | 0.7462 | 0.8315 | 0.8289 | 0.8287 | 0.8289 |
| 0.0895 | 7.5 | 360 | 0.8646 | 0.8070 | 0.8026 | 0.8006 | 0.8026 |
| 0.0758 | 8.12 | 390 | 1.0129 | 0.7883 | 0.7632 | 0.7642 | 0.7632 |
| 0.0636 | 8.75 | 420 | 0.9161 | 0.7893 | 0.7895 | 0.7866 | 0.7895 |
| 0.0239 | 9.38 | 450 | 0.9354 | 0.7409 | 0.7368 | 0.7367 | 0.7368 |
| 0.0449 | 10.0 | 480 | 1.0156 | 0.7994 | 0.8026 | 0.7980 | 0.8026 |
| 0.0089 | 10.62 | 510 | 0.9735 | 0.8125 | 0.8158 | 0.8125 | 0.8158 |
| 0.0348 | 11.25 | 540 | 1.0077 | 0.7867 | 0.7895 | 0.7848 | 0.7895 |
| 0.0037 | 11.88 | 570 | 1.1631 | 0.7868 | 0.7895 | 0.7857 | 0.7895 |
| 0.0022 | 12.5 | 600 | 1.1037 | 0.7998 | 0.8026 | 0.7993 | 0.8026 |
| 0.026 | 13.12 | 630 | 1.0309 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0118 | 13.75 | 660 | 1.0360 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0125 | 14.38 | 690 | 1.2095 | 0.7867 | 0.7895 | 0.7848 | 0.7895 |
| 0.0158 | 15.0 | 720 | 1.0658 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0072 | 15.62 | 750 | 1.1267 | 0.7708 | 0.7763 | 0.7688 | 0.7763 |
| 0.0015 | 16.25 | 780 | 1.1247 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0018 | 16.88 | 810 | 1.1386 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0013 | 17.5 | 840 | 1.1468 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0011 | 18.12 | 870 | 1.1692 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0013 | 18.75 | 900 | 1.1734 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0011 | 19.38 | 930 | 1.1857 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.001 | 20.0 | 960 | 1.1890 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.001 | 20.62 | 990 | 1.1924 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0009 | 21.25 | 1020 | 1.2005 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0009 | 21.88 | 1050 | 1.2084 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0009 | 22.5 | 1080 | 1.2216 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0009 | 23.12 | 1110 | 1.2237 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0008 | 23.75 | 1140 | 1.2231 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0008 | 24.38 | 1170 | 1.2286 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0008 | 25.0 | 1200 | 1.2312 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0008 | 25.62 | 1230 | 1.2325 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0008 | 26.25 | 1260 | 1.2362 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0008 | 26.88 | 1290 | 1.2415 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0007 | 27.5 | 1320 | 1.2462 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0008 | 28.12 | 1350 | 1.2471 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0007 | 28.75 | 1380 | 1.2466 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0007 | 29.38 | 1410 | 1.2478 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
| 0.0007 | 30.0 | 1440 | 1.2483 | 0.8152 | 0.8158 | 0.8141 | 0.8158 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
neuralmagic/TinyLlama-1.1B-Chat-v0.4-pruned50-quant-ds | neuralmagic | 2024-01-29T14:56:23Z | 52 | 0 | transformers | [
"transformers",
"onnx",
"llama",
"text-generation",
"deepsparse",
"arxiv:2301.00774",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.4",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v0.4",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-11-20T19:10:02Z | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.4
inference: false
model_type: llama
prompt_template: |
<|im_start|>user\n
{prompt}<|im_end|>\n
<|im_start|>assistant\n
quantized_by: mwitiderrick
tags:
- deepsparse
---
## TinyLlama 1.1B Chat 0.4 - DeepSparse
This repo contains model files for [TinyLlama 1.1B Chat](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.4) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```
Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration
prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
model = TextGeneration(model="hf:neuralmagic/TinyLlama-1.1B-Chat-v0.4-pruned50-quant-ds")
print(model(formatted_prompt, max_new_tokens=500).generations[0].text)
"""
Banana bread is a delicious and easy-to-make recipe that is sure to please. Here is a recipe for making banana bread:
Ingredients:
For the Banana Bread:
- 1 cup of sugar
- 1 cup of flour
- 1/2 cup of mashed bananas
- 1/4 cup of milk
- 1/2 cup of melted butter
- 1/4 cup of baking powder
- 1/4 cup of baking soda
- 1/4 cup of eggs
- 1/4 cup of milk
- 1/4 cup of sugar
Instructions:
1. Preheat the oven to 325°F (160°C).
2. In a large bowl, combine the sugar and flour.
3. In a separate bow, combine the mashed bananas, milk, butter, baking powder, baking soda, milk, sugar.
4. Add the bananas and milk into the flour-sugar mixture.
5. Pour the milk into the bowl of the flour-sugar mixture.
6. Pour the baking powder into the bowl of the flour-sugar mixture.
7. Pour the mashed bananas into the bowl of the flour-sugar mixture.
8. Add the eggs into the bowl of the flour-sugar mixture.
9. Stir the mixture until it becomes a dough.
10. Grease a 9-inch (23 cm) square pan.
11. Pour the mixture into the pan.
12. Bake the banana bread in the oven for 40 minutes.
13. Remove the banana bread from the oven and cool it.
14. Cut the bread into 16 pieces.
15. Make the glaze:
16. Sprinkle the sugar over the bread.
17. Bake the bread in the oven for 30 minutes.
"""
```
## Prompt template
```
<|im_start|>user\n
{prompt}<|im_end|>\n
<|im_start|>assistant\n
```
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
wget https://huggingface.co/neuralmagic/TinyLlama-1.1B-Chat-v0.4-pruned50-quant/raw/main/recipe.yaml # download recipe
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py TinyLlama/TinyLlama-1.1B-Chat-v0.4 open_platypus --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```
Run this kv-cache injection to speed up the model at inference by caching the Key and Value states:
```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector
input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide for performing one-shot quantization of large language models.
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ) |
CultureBERT/roberta-large-adhocracy | CultureBERT | 2024-01-29T14:47:55Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:1907.11692",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-26T16:43:28Z | ---
license: cc-by-nc-4.0
---
This model is a fine-tuned version of RoBERTa-large [1]. It was trained on 1,400 employee reviews to measure corporate culture. More specifically, it measures the **culture dimension “adhocracy”** of the Competing Values Framework [2,3]. An organization that exhibits an adhocracy culture is characterized by an emphasis on **creating** [2].
The model assigns one of three possible labels:
0 (**neutral**): Text does not allow any inference about an adhocracy culture. <br />
1 (**positive**): Text contains information in line with an adhocracy culture. <br />
2 (**negative**): Text contains information contrary to an adhocracy culture. <br />
For details on the model and its performance, see Koch and Pasch (2023). Please cite this article when using the model: <br />
S. Koch and S. Pasch, "CultureBERT: Measuring Corporate Culture With Transformer-Based Language Models," in 2023 IEEE International Conference on Big Data (BigData), pp. 3176-3184. doi: 10.1109/BigData59044.2023.10386765
Please see the following **tutorial** on how to apply CultureBERT to measure corporate culture in your own text documents: https://github.com/Stefan-Pasch/CultureBERT
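For a quick test before working through the tutorial, a minimal sketch with the `text-classification` pipeline is shown below (assumption: the id-to-label mapping stored in the checkpoint's config matches the neutral/positive/negative scheme above; the example review is invented):
```python
from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="CultureBERT/roberta-large-adhocracy",
)
review = "We are encouraged to experiment and bring new ideas to market quickly."
print(classifier(review))
```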
Other References:
[1] Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; ... & Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
[2] Cameron, Kim S.; Quinn, Robert E. (2011): Diagnosing and Changing Organizational Culture. Based on the Competing Values Framework. 3rd ed. San Francisco (CA): Jossey-Bass.
[3] Quinn, Robert E.; Rohrbaugh, John (1983): A Spatial Model of Effectiveness Criteria: Towards a Competing Values Approach to Organizational Analysis. In Management Science 29 (3), pp. 363–377. DOI: 10.1287/mnsc.29.3.363.
|
CultureBERT/roberta-large-dominant-culture | CultureBERT | 2024-01-29T14:40:01Z | 100 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:1907.11692",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-20T20:09:50Z | ---
license: cc-by-nc-4.0
---
This model is a fine-tuned version of RoBERTa-large [1]. It was trained on 1,400 employee reviews to measure corporate culture. More specifically, it predicts which of the four culture dimensions of the Competing Values Framework [2,3] best fits the text at hand, i.e., what the **dominant culture** is.
The model assigns one of four possible labels:
0 (**clan**): Text is best described by a **clan (collaborating) culture**. <br />
1 (**adhocracy**): Text is best described by an **adhocracy (creating) culture**.<br />
2 (**market**): Text is best described by a **market (competing) culture**.<br />
3 (**hierarchy**): Text is best described by a **hierarchy (controlling) culture**.<br />
For details on the model and its performance, see Koch and Pasch (2023). Please cite this article when using the model: <br />
S. Koch and S. Pasch, "CultureBERT: Measuring Corporate Culture With Transformer-Based Language
Models," in 2023 IEEE International Conference on Big Data (BigData), pp. 3176-3184. doi: 10.1109/BigData59044.2023.10386765
Please see the following **tutorial** on how to apply CultureBERT to measure corporate culture in your own text documents: https://github.com/Stefan-Pasch/CultureBERT
Other References:
[1] Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; ... & Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
[2] Cameron, Kim S.; Quinn, Robert E. (2011): Diagnosing and Changing Organizational Culture. Based on the Competing Values Framework. 3rd ed. San Francisco (CA): Jossey-Bass.
[3] Quinn, Robert E.; Rohrbaugh, John (1983): A Spatial Model of Effectiveness Criteria: Towards a Competing Values Approach to Organizational Analysis. In Management Science 29 (3), pp. 363–377. DOI: 10.1287/mnsc.29.3.363.
|
daniel-sf/dpo_test_3 | daniel-sf | 2024-01-29T14:37:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T14:37:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gokaygokay/phi2_desctoprompt | gokaygokay | 2024-01-29T14:23:02Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-01-29T14:14:04Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
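A loading sketch that mirrors this quantization config could look as follows (assumptions: the base model is `microsoft/phi-2`, as the repository name suggests, and the PEFT adapter weights are stored in this repository):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
base_id = "microsoft/phi-2"                  # assumed base model
adapter_id = "gokaygokay/phi2_desctoprompt"  # this repository
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```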
### Framework versions
- PEFT 0.4.0
|
SalomonMetre13/nnd_fr_mt_v3 | SalomonMetre13 | 2024-01-29T14:08:35Z | 332 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"translation",
"nnd",
"dataset:SalomonMetre13/nnd_fr_14k",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2023-11-14T14:51:35Z | ---
license: mit
language:
- nnd
datasets:
- SalomonMetre13/nnd_fr_14k
metrics:
- bleu
library_name: transformers
pipeline_tag: translation
---
This is a <span style="color:teal;">machine translation model</span> that translates <a href="https://en.wikipedia.org/wiki/Nande_language" style="color:green;">Nande</a> to French. It is the result of fine-tuning the <span style="color:teal;">t5-base</span> pretrained model on a <a href="https://huggingface.co/datasets/SalomonMetre13/nnd_fr_26k" style="color:green;">Nande-French parallel corpus</a>. The model achieved a BLEU score of <span style="color:teal;">29.9</span>. |
weifeng1994/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan | weifeng1994 | 2024-01-29T14:06:17Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-01-28T09:12:05Z | ---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5965
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
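As a starting point, a minimal inference sketch with the `audio-classification` pipeline could look like this (the audio path is a placeholder; the pipeline handles decoding and resampling of the clip):
```python
from transformers import pipeline
classifier = pipeline(
    "audio-classification",
    model="weifeng1994/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
# "song.wav" is a placeholder for a local music clip.
print(classifier("song.wav", top_k=3))
```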
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0357 | 1.0 | 225 | 0.8347 | 0.77 |
| 0.6592 | 2.0 | 450 | 0.6398 | 0.83 |
| 0.8437 | 3.0 | 675 | 0.8383 | 0.8 |
| 0.0105 | 4.0 | 900 | 0.5734 | 0.86 |
| 0.0124 | 5.0 | 1125 | 0.9027 | 0.86 |
| 0.0001 | 6.0 | 1350 | 0.6319 | 0.85 |
| 0.128 | 7.0 | 1575 | 0.5766 | 0.89 |
| 0.0001 | 8.0 | 1800 | 0.5357 | 0.87 |
| 0.0001 | 9.0 | 2025 | 0.5984 | 0.87 |
| 0.0 | 10.0 | 2250 | 0.5965 | 0.87 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
golesheed/whisper-non-native-children-3-dutch | golesheed | 2024-01-29T14:00:29Z | 52 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"nl",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-29T13:04:51Z | ---
language:
- nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3167
- Wer: 11.6330
## Model description
More information needed
## Intended uses & limitations
More information needed
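A minimal transcription sketch is given below (the audio path is a placeholder, and forcing Dutch transcription via `generate_kwargs` is an assumption based on the card's language tag):
```python
from transformers import pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="golesheed/whisper-non-native-children-3-dutch",
)
result = asr(
    "recording.wav",  # placeholder path to a Dutch speech sample
    generate_kwargs={"language": "dutch", "task": "transcribe"},
)
print(result["text"])
```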
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6772 | 0.71 | 30 | 0.3209 | 16.8396 |
| 0.2613 | 1.43 | 60 | 0.3041 | 12.8163 |
| 0.1683 | 2.14 | 90 | 0.2908 | 11.8332 |
| 0.0777 | 2.86 | 120 | 0.2916 | 10.8138 |
| 0.0428 | 3.57 | 150 | 0.2965 | 11.7786 |
| 0.0228 | 4.29 | 180 | 0.3114 | 11.8150 |
| 0.0107 | 5.0 | 210 | 0.3167 | 11.6330 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
NiklasV/Reinforce-CartPole-v1 | NiklasV | 2024-01-29T13:59:08Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-29T13:58:56Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 478.00 +/- 66.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DominoPizza/result-first | DominoPizza | 2024-01-29T13:58:24Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T13:54:48Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Kavya26/my-pet-pigeon | Kavya26 | 2024-01-29T13:57:10Z | 2 | 1 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-29T13:53:01Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Pigeon Dreambooth model trained by Kavya26 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 23WU0101013
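A minimal generation sketch with 🧨 Diffusers is shown below (assumption: the exact instance prompt used for DreamBooth training is not documented, so the prompt is a guess and should be adapted):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "Kavya26/my-pet-pigeon", torch_dtype=torch.float16
).to("cuda")
# Adjust the prompt to the instance token used during training.
image = pipe("a photo of my pet pigeon perched on a rooftop").images[0]
image.save("my-pet-pigeon.png")
```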
Sample pictures of this concept:
*(four sample .jpeg images)*
|
tanatapanun/fine-tuned-BioBARTv2-20-epochs-1024-input-224-output | tanatapanun | 2024-01-29T13:53:15Z | 90 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-29T13:20:03Z | ---
base_model: checkpoint_global_step_200000
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBARTv2-20-epochs-1024-input-224-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBARTv2-20-epochs-1024-input-224-output
This model is a fine-tuned version of [checkpoint_global_step_200000](https://huggingface.co/checkpoint_global_step_200000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0774
- Rouge1: 0.1615
- Rouge2: 0.0329
- Rougel: 0.1253
- Rougelsum: 0.1254
- Gen Len: 37.68
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 6.4893 | 0.0304 | 0.0 | 0.0289 | 0.0289 | 22.56 |
| No log | 2.0 | 302 | 1.2747 | 0.1131 | 0.0288 | 0.0929 | 0.0941 | 23.46 |
| No log | 3.0 | 453 | 1.1070 | 0.0619 | 0.0168 | 0.0566 | 0.0547 | 13.19 |
| 4.2833 | 4.0 | 604 | 1.0413 | 0.1159 | 0.0342 | 0.0824 | 0.0834 | 42.72 |
| 4.2833 | 5.0 | 755 | 1.0087 | 0.1593 | 0.0354 | 0.1241 | 0.1239 | 41.02 |
| 4.2833 | 6.0 | 906 | 0.9982 | 0.1636 | 0.0333 | 0.1217 | 0.1222 | 45.53 |
| 0.8472 | 7.0 | 1057 | 0.9873 | 0.1457 | 0.0376 | 0.1054 | 0.1052 | 43.69 |
| 0.8472 | 8.0 | 1208 | 0.9853 | 0.1464 | 0.0345 | 0.1154 | 0.1151 | 32.91 |
| 0.8472 | 9.0 | 1359 | 0.9952 | 0.1645 | 0.0315 | 0.1323 | 0.1327 | 29.82 |
| 0.5885 | 10.0 | 1510 | 0.9992 | 0.1904 | 0.05 | 0.1443 | 0.1427 | 48.5 |
| 0.5885 | 11.0 | 1661 | 1.0141 | 0.1639 | 0.0342 | 0.1264 | 0.1271 | 37.69 |
| 0.5885 | 12.0 | 1812 | 1.0188 | 0.1793 | 0.036 | 0.141 | 0.1401 | 40.51 |
| 0.5885 | 13.0 | 1963 | 1.0351 | 0.1658 | 0.0446 | 0.1355 | 0.1335 | 32.91 |
| 0.4129 | 14.0 | 2114 | 1.0391 | 0.1719 | 0.0348 | 0.1325 | 0.1317 | 39.64 |
| 0.4129 | 15.0 | 2265 | 1.0484 | 0.1756 | 0.0414 | 0.1354 | 0.1349 | 38.93 |
| 0.4129 | 16.0 | 2416 | 1.0643 | 0.1639 | 0.0391 | 0.1249 | 0.1239 | 35.73 |
| 0.2982 | 17.0 | 2567 | 1.0694 | 0.1435 | 0.0274 | 0.1088 | 0.1077 | 34.92 |
| 0.2982 | 18.0 | 2718 | 1.0714 | 0.1522 | 0.0327 | 0.1171 | 0.1165 | 35.32 |
| 0.2982 | 19.0 | 2869 | 1.0784 | 0.15 | 0.0311 | 0.1116 | 0.1114 | 36.03 |
| 0.2413 | 20.0 | 3020 | 1.0774 | 0.1615 | 0.0329 | 0.1253 | 0.1254 | 37.68 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.1
|
checkiejan/phi2-marking-checkpoint660 | checkiejan | 2024-01-29T13:45:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T13:44:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shahzebnaveed/ppo-LunarLander-v2 | shahzebnaveed | 2024-01-29T13:40:18Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-29T13:14:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.35 +/- 28.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hoanghoavienvo/roberta-base-detect-cheapfake-co1-co2 | hoanghoavienvo | 2024-01-29T13:35:44Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-29T13:28:17Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-cheapfake-co1-co2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-cheapfake-co1-co2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3239
- Accuracy: 0.905
- F1: 0.9026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 38 | 0.6857 | 0.52 | 0.6643 |
| No log | 2.0 | 76 | 0.5835 | 0.78 | 0.7284 |
| No log | 3.0 | 114 | 0.3515 | 0.87 | 0.8646 |
| No log | 4.0 | 152 | 0.3897 | 0.845 | 0.8517 |
| No log | 5.0 | 190 | 0.4177 | 0.845 | 0.8268 |
| No log | 6.0 | 228 | 0.3364 | 0.895 | 0.8889 |
| No log | 7.0 | 266 | 0.3319 | 0.89 | 0.8842 |
| No log | 8.0 | 304 | 0.3597 | 0.885 | 0.8770 |
| No log | 9.0 | 342 | 0.3205 | 0.91 | 0.9072 |
| No log | 10.0 | 380 | 0.3239 | 0.905 | 0.9026 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
christinacdl/XLM_RoBERTa-Multilingual-Opus-mt-Clickbait-Detection | christinacdl | 2024-01-29T13:34:58Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"en",
"el",
"it",
"pl",
"pt",
"ro",
"de",
"es",
"dataset:christinacdl/clickbait_detection_dataset",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-26T12:13:25Z | ---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: XLM_RoBERTa-Multilingual-Opus-mt-Clickbait-Detection
results: []
datasets:
- christinacdl/clickbait_detection_dataset
language:
- en
- el
- it
- pl
- pt
- ro
- de
- es
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM_RoBERTa-Multilingual-Opus-mt-Clickbait-Detection
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2196
- Micro F1: 0.9764
- Macro F1: 0.9763
- Accuracy: 0.9764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.15.0 |
tarekziade/t5-small-booksum-sft-3-3 | tarekziade | 2024-01-29T13:30:08Z | 91 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-29T13:21:45Z | ---
license: apache-2.0
---
This is a shrunk and fine-tuned version of https://huggingface.co/cnicu/t5-small-booksum
See https://ziade.org/2024/01/27/compressing-t5-models-for-summarization/ for the rationale.
Accuracy after shrinking:
rouge-1 Accuracy:
- F1 Accuracy: 95.99%
- Precision Accuracy: 85.37%
- Recall Accuracy: 95.73%
rouge-2 Accuracy:
- F1 Accuracy: 99.54%
- Precision Accuracy: 97.14%
- Recall Accuracy: 99.54%
rouge-l Accuracy:
- F1 Accuracy: 96.76%
- Precision Accuracy: 84.41%
- Recall Accuracy: 96.36%
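A minimal usage sketch, assuming the standard transformers summarization pipeline applies to this checkpoint:
```python
from transformers import pipeline

# Load the shrunk summarization model from the Hub.
summarizer = pipeline("summarization", model="tarekziade/t5-small-booksum-sft-3-3")
long_text = (
    "The novel follows a young sailor who leaves his village, survives a shipwreck, "
    "and slowly builds a new life on a remote island before returning home years later."
)
print(summarizer(long_text, max_length=60)[0]["summary_text"])
```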
 |
Yzh1998/photomaker | Yzh1998 | 2024-01-29T13:28:02Z | 0 | 0 | bertopic | [
"bertopic",
"text-to-image",
"aa",
"dataset:HuggingFaceM4/WebSight",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-01-29T13:22:25Z | ---
license: apache-2.0
datasets:
- HuggingFaceM4/WebSight
language:
- aa
metrics:
- bleu
library_name: bertopic
pipeline_tag: text-to-image
--- |
metercai/SimpleSDXL | metercai | 2024-01-29T13:27:25Z | 0 | 6 | diffusers | [
"diffusers",
"text-to-image",
"image-to-image",
"stable-diffusion",
"StableDiffusionXLPipeline",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | text-to-image | 2023-11-20T16:08:02Z | ---
license: apache-2.0
language:
- en
- zh
library_name: diffusers
tags:
- text-to-image
- image-to-image
- stable-diffusion
- StableDiffusionXLPipeline
---
## Fist Screen UI / 首屏界面

Enhanced features based on Fooocus, with seamless upgrading from it <br> 在Fooocus基础上增强功能,可无缝迁移升级
- **Chinese English mixed prompts**: Offline model without API dependencies and limitations. <br> **中英文混编提示词**: 离线模型,无API依赖和限制。
- **Finished image sets browsing**: Fooocus can only browse the image set generated in the current run. <br> **已生成图片集浏览**: 原生版本仅限浏览当次生成的图片集。
- **View parameters and extract regeneration**: Can view the generation parameters of historical images, and can also extract parameters to backfill for secondary generation. <br> **查看生成参数和提取重生**: 可即时查看历史图片的生成参数,也可提取参数回填界面,二次生成。
- **Switch preset and theme online**: Fooocus requires modifying startup parameters and restarting. <br> **在线切换预置包和背景色**: 原生版本需要修改启动参数重启。
## Install/安装方法
Synchronize all directories and files of this project to the `SimpleSDXL/models/` directory.
在SimpleSDXL目录下同步本项目所有目录和文件到models目录下
```bash
# 1. Download the application code / 下载应用代码库
git clone https://github.com/metercai/SimpleSDXL
# 2. Synchronize the model files (requires the git-lfs module) / 安装模型文件,需要提前安装lfs模块
git clone https://huggingface.co/metercai/SimpleSDXL SimpleSDXL/models
```
For the usage of SimpleSDXL, refer to: https://github.com/metercai/SimpleSDXL
SimpleSDXL的具体使用方法,参见:https://github.com/metercai/SimpleSDXL
## Model List/模型清单
This model repository includes the various models required for SDXL image generation, including:
本模型库包含了SDXL出图中需要的各种模型,具体包括:
Base基础模型:
- sd_xl_base_1.0_0.9vae.safetensors
- bluePencilXL_v050.safetensors
- juggernautXL_version6Rundiffusion.safetensors
- realisticStockPhoto_v10.safetensors
Refiner精炼模型:
- sd_xl_refiner_1.0_0.9vae.safetensors
- DreamShaper_8_pruned.safetensors
LoRa局部模型:
- sd_xl_offset_example-lora_1.0.safetensors
- SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors
- sdxl_lcm_lora.safetensors
Embeddings模型:
- unaestheticXLv31.safetensors
Inpaint修复模型:
- fooocus_inpaint_head.pth
- inpaint_v26.fooocus.patch
Controlnet模型:
- control-lora-canny-rank128.safetensors
- fooocus_xl_cpds_128.safetensors
- fooocus_ip_negative.safetensors
- ip-adapter-plus_sdxl_vit-h.bin
- ip-adapter-plus-face_sdxl_vit-h.bin
- detection_Resnet50_Final.pth
- parsing_parsenet.pth
Clip_vision模型:
- clip_vision_vit_h.safetensors
Upscale模型:
- fooocus_upscaler_s409985e5.bin
Prompt_expansion模型:
- fooocus_expansion
Translator翻译模型:
- nllb-200-distilled-600M
In total there are 43 GB of files managed with git-lfs; including the hidden directories, they require 86 GB of local hard drive space.
总计43G文件,用git-lfs进行管理包含隐含目录,需占用本地86G硬盘空间。
|
Commandante/german-party-sentiment-bert-111-synonyms | Commandante | 2024-01-29T13:27:00Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:mdraw/german-news-sentiment-bert",
"base_model:finetune:mdraw/german-news-sentiment-bert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-29T12:40:15Z | ---
base_model: mdraw/german-news-sentiment-bert
tags:
- generated_from_trainer
model-index:
- name: german-party-sentiment-bert-111-synonyms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-party-sentiment-bert-111-synonyms
This model is a fine-tuned version of [mdraw/german-news-sentiment-bert](https://huggingface.co/mdraw/german-news-sentiment-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 20
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 120
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2178 | 1.0 | 25 | 1.1090 |
| 1.1258 | 2.0 | 50 | 1.0695 |
| 1.1258 | 3.0 | 75 | 1.0635 |
| 1.0362 | 4.0 | 100 | 1.0968 |
| 1.0362 | 5.0 | 125 | 1.1286 |
| 0.9433 | 6.0 | 150 | 1.1412 |
| 0.9433 | 7.0 | 175 | 1.1431 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu118
- Tokenizers 0.15.1
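A minimal inference sketch, assuming the fine-tune keeps the text-classification head and label set of the base german-news-sentiment-bert model:
```python
from transformers import pipeline

# Labels are inherited from the base sentiment model; treat them as an assumption.
classifier = pipeline(
    "text-classification",
    model="Commandante/german-party-sentiment-bert-111-synonyms",
)
print(classifier("Die Partei hat in der Debatte einen starken Eindruck hinterlassen."))
```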
|
jlbaker361/ft1000-30 | jlbaker361 | 2024-01-29T13:24:49Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2024-01-29T03:50:29Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - jlbaker361/ft1000-30
These are LoRA adaption weights for stabilityai/stable-diffusion-2-base. The weights were fine-tuned on the jlbaker361/wikiart-balanced1000 dataset.
Training epochs = 1
num_train_timesteps = 30
You can find some example images in the following.




|
AI-Sweden-Models/gpt-sw3-6.7b-v2 | AI-Sweden-Models | 2024-01-29T13:21:10Z | 4,755 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"sv",
"no",
"en",
"is",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-28T12:25:01Z | ---
license: other
language:
- da
- sv
- 'no'
- en
- is
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
**V2**
This version of the 6.7 Billion model is trained with the same tokenizer as the other model sizes, but on a different data distribution (Much more English and Code) and for longer.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-6.7b-v2"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
- Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021).
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AI-Sweden-Models__gpt-sw3-6.7b-v2)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 34.74 |
| ARC (25-shot) | 39.42 |
| HellaSwag (10-shot) | 66.39 |
| MMLU (5-shot) | 30.09 |
| TruthfulQA (0-shot) | 35.6 |
| Winogrande (5-shot) | 64.25 |
| GSM8K (5-shot) | 1.21 |
| DROP (3-shot) | 6.22 |
|
AI-Sweden-Models/gpt-sw3-6.7b | AI-Sweden-Models | 2024-01-29T13:20:53Z | 1,871 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"sv",
"no",
"da",
"is",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-14T12:33:22Z | ---
license: other
language:
- en
- sv
- 'no'
- da
- is
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-6.7b"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar) is filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that are typically of the highest textual quality, or that complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
# Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021). A toy sketch of this MinHash-based deduplication is given below, after the questions in this section.
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
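The toy sketch below illustrates MinHash-based near-duplicate filtering of the kind referred to in the preprocessing answer above; it is not the project's actual pipeline, relies on the third-party `datasketch` library, and shingles documents by unique whitespace tokens purely for brevity:

```python
from datasketch import MinHash, MinHashLSH

def minhash_signature(text, num_perm=128):
    # Crude token-level shingling, for illustration only
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf-8"))
    return m

documents = {
    "doc_a": "katten sitter på mattan i solen",
    "doc_b": "katten sitter på mattan i solen idag",
    "doc_c": "ett helt annat dokument om vädret",
}

lsh = MinHashLSH(threshold=0.8, num_perm=128)  # approximate Jaccard threshold for "near duplicate"
deduplicated = {}
for doc_id, text in documents.items():
    signature = minhash_signature(text)
    if lsh.query(signature):   # a near-duplicate has already been kept -> drop this document
        continue
    lsh.insert(doc_id, signature)
    deduplicated[doc_id] = text

print(sorted(deduplicated))    # doc_b is typically dropped as a near duplicate of doc_a
```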
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AI-Sweden-Models__gpt-sw3-6.7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 33.18 |
| ARC (25-shot) | 36.35 |
| HellaSwag (10-shot) | 60.75 |
| MMLU (5-shot) | 26.0 |
| TruthfulQA (0-shot) | 39.04 |
| Winogrande (5-shot) | 60.69 |
| GSM8K (5-shot) | 0.53 |
| DROP (3-shot) | 8.92 |
|
AI-Sweden-Models/gpt-sw3-1.3b | AI-Sweden-Models | 2024-01-29T13:20:38Z | 4,122 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"sv",
"no",
"en",
"is",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-14T12:33:00Z | ---
license: apache-2.0
language:
- da
- sv
- 'no'
- en
- is
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. AI Sweden shares GPT-SW3 in a controlled pre-release with organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. This is an important step in the process of validating the model and collecting feedback on both what works well and what does not.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-1.3b"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
# Tokenize the prompt and move the input ids to the model's device
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)

# Sample up to 100 new tokens from the model
generated_token_ids = model.generate(
    inputs=input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.6,
    top_p=1,
)[0]

# Decode the generated token ids back into text
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar) is filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that are typically of the highest textual quality, or that complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
# Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021).
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AI-Sweden-Models__gpt-sw3-1.3b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 29.99 |
| ARC (25-shot) | 30.38 |
| HellaSwag (10-shot) | 50.4 |
| MMLU (5-shot) | 26.14 |
| TruthfulQA (0-shot) | 39.97 |
| Winogrande (5-shot) | 58.88 |
| GSM8K (5-shot) | 0.08 |
| DROP (3-shot) | 4.08 |
|
hoanghoavienvo/roberta-base-detect-cheapfake-ca1-co2 | hoanghoavienvo | 2024-01-29T13:16:56Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-29T13:09:31Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-cheapfake-ca1-co2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-cheapfake-ca1-co2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2524
- Accuracy: 0.94
- F1: 0.9388
## Model description
More information needed
## Intended uses & limitations
More information needed
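The card does not ship usage code, but since the repository carries the `text-classification` pipeline tag, a minimal hedged sketch with the `transformers` pipeline could look as follows (the caption text is a made-up placeholder, and the exact caption-pair input format expected by the model is not documented here):

```python
from transformers import pipeline

# Text-classification pipeline around the fine-tuned RoBERTa checkpoint
classifier = pipeline(
    "text-classification",
    model="hoanghoavienvo/roberta-base-detect-cheapfake-ca1-co2",
)

# Placeholder input; returns the most likely label with its score
print(classifier("Caption 1: A flooded street in Jakarta. Caption 2: Tourists enjoy a beach festival."))
```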
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 38 | 0.6794 | 0.645 | 0.4409 |
| No log | 2.0 | 76 | 0.4563 | 0.82 | 0.8022 |
| No log | 3.0 | 114 | 0.3002 | 0.875 | 0.8718 |
| No log | 4.0 | 152 | 0.3539 | 0.87 | 0.8762 |
| No log | 5.0 | 190 | 0.2854 | 0.89 | 0.8804 |
| No log | 6.0 | 228 | 0.2317 | 0.935 | 0.9333 |
| No log | 7.0 | 266 | 0.2419 | 0.93 | 0.9293 |
| No log | 8.0 | 304 | 0.2514 | 0.935 | 0.9326 |
| No log | 9.0 | 342 | 0.2549 | 0.935 | 0.9340 |
| No log | 10.0 | 380 | 0.2524 | 0.94 | 0.9388 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
archit11/videomae-base-finetuned-fight-nofight-subset | archit11 | 2024-01-29T13:14:31Z | 46 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-01-29T12:46:29Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-fight-nofight-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-fight-nofight-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
## Model description
Used to detect fights in CCTV footage.
## Intended uses & limitations
USE HOWEVER YOU WANT!!!
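As an illustration only (the card ships no usage code), the checkpoint can presumably be queried through the `transformers` video-classification pipeline; the clip path below is a placeholder, and decoding the video additionally requires a backend such as `av` or `decord`:

```python
from transformers import pipeline

# Video-classification pipeline around the fine-tuned VideoMAE checkpoint
video_classifier = pipeline(
    "video-classification",
    model="archit11/videomae-base-finetuned-fight-nofight-subset",
)

# Placeholder path to a local CCTV clip; returns a list of {label, score} dicts
predictions = video_classifier("cctv_clip.mp4")
print(predictions)
```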
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 102
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Marcus2112/rl_course_vizdoom_health_gathering_supreme | Marcus2112 | 2024-01-29T13:02:26Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-29T13:02:14Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.00 +/- 5.48
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Marcus2112/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
golesheed/whisper-non-native-children-2-dutch | golesheed | 2024-01-29T12:53:41Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"nl",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-29T08:33:51Z | ---
language:
- nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4076
- Wer: 12.3813
## Model description
More information needed
## Intended uses & limitations
More information needed
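As a usage sketch not included in the original card, the fine-tuned checkpoint can be loaded with the `transformers` automatic-speech-recognition pipeline; the audio path below is a placeholder:

```python
from transformers import pipeline

# ASR pipeline around the fine-tuned Whisper Large V2 checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="golesheed/whisper-non-native-children-2-dutch",
)

# Placeholder path; any audio file readable by ffmpeg should work
result = asr("dutch_sample.wav")
print(result["text"])
```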
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6749 | 0.71 | 30 | 0.3798 | 17.3625 |
| 0.26 | 1.43 | 60 | 0.3843 | 14.0477 |
| 0.163 | 2.14 | 90 | 0.3617 | 12.5963 |
| 0.0743 | 2.86 | 120 | 0.3539 | 13.2234 |
| 0.0429 | 3.57 | 150 | 0.3883 | 14.4598 |
| 0.024 | 4.29 | 180 | 0.4002 | 14.1014 |
| 0.011 | 5.0 | 210 | 0.4076 | 12.3813 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
atom-mu-control/a2c-PandaReachDense-v3 | atom-mu-control | 2024-01-29T12:52:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-29T12:47:32Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual `huggingface_sb3` naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (filename assumed) and load it
checkpoint = load_from_hub(
    repo_id="atom-mu-control/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```
|
Luna-Skywalker/rice_dtect | Luna-Skywalker | 2024-01-29T12:49:38Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-01-29T12:49:31Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
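Since the repository is tagged `fastai`, a minimal loading sketch (assuming the learner was pushed with `push_to_hub_fastai`; the image path is a placeholder) might look like this:

```python
from huggingface_hub import from_pretrained_fastai

# Download and reconstruct the fastai Learner from the Hub
learner = from_pretrained_fastai("Luna-Skywalker/rice_dtect")

# Placeholder image path; predict returns (decoded label, label index, probabilities)
prediction = learner.predict("rice_leaf.jpg")
print(prediction)
```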
## Training and evaluation data
More information needed
|
danwils/mala-alpaca-lora | danwils | 2024-01-29T12:38:46Z | 0 | 0 | null | [
"region:us"
] | null | 2024-01-29T12:30:31Z | library_name: peft
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
- PEFT 0.4.0
|
Owentaku/distilbert-base-uncased-finetuned-imdb | Owentaku | 2024-01-29T12:36:13Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-01-26T09:56:28Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4118
## Model description
More information needed
## Intended uses & limitations
More information needed
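Because this checkpoint is a masked-language model adapted to movie reviews, a small hedged example with the `fill-mask` pipeline (the sentence is a made-up placeholder) could look like:

```python
from transformers import pipeline

# Fill-mask pipeline around the domain-adapted DistilBERT checkpoint
fill_mask = pipeline(
    "fill-mask",
    model="Owentaku/distilbert-base-uncased-finetuned-imdb",
)

# Placeholder sentence; [MASK] is DistilBERT's mask token
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```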
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7024 | 1.0 | 157 | 2.4965 |
| 2.5792 | 2.0 | 314 | 2.4280 |
| 2.5354 | 3.0 | 471 | 2.4508 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
rdzotz/w2v-bert-2.0-mongolian-colab-CV16.0 | rdzotz | 2024-01-29T12:24:58Z | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-29T11:08:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
m0saan/distilbert-base-uncased-finetuned-emotion | m0saan | 2024-01-29T12:22:41Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-08T23:48:33Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9239930965254398
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.924
- F1: 0.9240
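For illustration, a minimal inference sketch using the 🤗 `pipeline` API (the checkpoint id below is this repo; emotion label names only appear if `id2label` was saved with the model, otherwise generic `LABEL_k` names are returned):

```python
from transformers import pipeline

# Load this fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="m0saan/distilbert-base-uncased-finetuned-emotion",
)

# Classify a sample sentence; returns the top label and its score
print(classifier("I can't wait to see my friends this weekend!"))
```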
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8255 | 1.0 | 250 | 0.3306 | 0.907 | 0.9056 |
| 0.256 | 2.0 | 500 | 0.2225 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
karthikrathod/llm_repo_v3 | karthikrathod | 2024-01-29T12:16:49Z | 0 | 0 | null | [
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T12:16:31Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
tanatapanun/fine-tuned-BioBARTv2-20-epochs-1024-input-192-output | tanatapanun | 2024-01-29T12:13:01Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-29T11:40:16Z | ---
base_model: checkpoint_global_step_200000
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBARTv2-20-epochs-1024-input-192-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBARTv2-20-epochs-1024-input-192-output
This model is a fine-tuned version of [checkpoint_global_step_200000](https://huggingface.co/checkpoint_global_step_200000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2139
- Rouge1: 0.1795
- Rouge2: 0.0354
- Rougel: 0.1282
- Rougelsum: 0.1304
- Gen Len: 38.23
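A minimal generation sketch for this seq2seq checkpoint is shown below. It assumes the tokenizer was pushed to this repo and that the input is clinical-style text to be summarized; both are assumptions, since the card does not document the task:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "tanatapanun/fine-tuned-BioBARTv2-20-epochs-1024-input-192-output"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Placeholder input; the 1024/192 in the model name suggest the input/output token budgets
text = "The patient was admitted with chest pain and shortness of breath."
inputs = tokenizer(text, max_length=1024, truncation=True, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=192)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```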
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 6.6012 | 0.0046 | 0.0001 | 0.0045 | 0.0046 | 8.77 |
| No log | 2.0 | 302 | 1.4692 | 0.1068 | 0.0318 | 0.0933 | 0.0937 | 29.33 |
| No log | 3.0 | 453 | 1.2563 | 0.0982 | 0.0232 | 0.0736 | 0.0745 | 36.78 |
| 4.4063 | 4.0 | 604 | 1.1824 | 0.1163 | 0.033 | 0.0884 | 0.0888 | 31.46 |
| 4.4063 | 5.0 | 755 | 1.1451 | 0.1667 | 0.0343 | 0.1302 | 0.1308 | 42.16 |
| 4.4063 | 6.0 | 906 | 1.1288 | 0.1428 | 0.0268 | 0.1118 | 0.1124 | 33.8 |
| 0.9455 | 7.0 | 1057 | 1.1192 | 0.1474 | 0.035 | 0.1089 | 0.1098 | 39.35 |
| 0.9455 | 8.0 | 1208 | 1.1202 | 0.1598 | 0.0354 | 0.1227 | 0.1245 | 37.79 |
| 0.9455 | 9.0 | 1359 | 1.1227 | 0.1683 | 0.0312 | 0.1236 | 0.1247 | 50.05 |
| 0.6533 | 10.0 | 1510 | 1.1241 | 0.1744 | 0.0447 | 0.1341 | 0.1364 | 39.0 |
| 0.6533 | 11.0 | 1661 | 1.1321 | 0.1703 | 0.0411 | 0.1273 | 0.1285 | 42.15 |
| 0.6533 | 12.0 | 1812 | 1.1465 | 0.1756 | 0.0343 | 0.1258 | 0.1277 | 34.42 |
| 0.6533 | 13.0 | 1963 | 1.1560 | 0.1854 | 0.0442 | 0.1381 | 0.14 | 38.38 |
| 0.455 | 14.0 | 2114 | 1.1690 | 0.1913 | 0.0388 | 0.1371 | 0.1398 | 39.29 |
| 0.455 | 15.0 | 2265 | 1.1845 | 0.1688 | 0.0305 | 0.1205 | 0.1226 | 34.5 |
| 0.455 | 16.0 | 2416 | 1.1860 | 0.1913 | 0.039 | 0.1345 | 0.136 | 41.5 |
| 0.3282 | 17.0 | 2567 | 1.1955 | 0.1782 | 0.0344 | 0.1243 | 0.1266 | 43.78 |
| 0.3282 | 18.0 | 2718 | 1.2108 | 0.1796 | 0.0386 | 0.1295 | 0.1319 | 37.64 |
| 0.3282 | 19.0 | 2869 | 1.2160 | 0.1762 | 0.034 | 0.1255 | 0.1277 | 38.08 |
| 0.2642 | 20.0 | 3020 | 1.2139 | 0.1795 | 0.0354 | 0.1282 | 0.1304 | 38.23 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Federic/lora-fine-tuning-llama2-SQL-codellama-TEST | Federic | 2024-01-29T12:09:58Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:finetune:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-01-29T09:04:58Z | ---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: lora-fine-tuning-llama2-SQL-codellama-TEST
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-fine-tuning-llama2-SQL-codellama-TEST
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
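The card does not yet document usage. The sketch below is purely hypothetical: it assumes this repo stores a PEFT/LoRA adapter for the base model named above (if it instead holds merged weights, load the repo directly with `AutoModelForCausalLM`), and the SQL prompt is only illustrative:

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_repo = "Federic/lora-fine-tuning-llama2-SQL-codellama-TEST"  # assumed to contain a PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(adapter_repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = "-- Write a SQL query that counts users per country\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```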
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
linhcuem/checker_TB_yolov8_ver3 | linhcuem | 2024-01-29T12:06:11Z | 1 | 0 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"model-index",
"region:us"
] | object-detection | 2024-01-29T12:05:48Z |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
library_name: ultralytics
library_version: 8.0.43
inference: false
model-index:
- name: linhcuem/checker_TB_yolov8_ver3
results:
- task:
type: object-detection
metrics:
        - type: precision # since [email protected] is not available on hf.co/metrics
          value: 0.94985 # min: 0.0 - max: 1.0
          name: [email protected](box)
---
<div align="center">
<img width="640" alt="linhcuem/checker_TB_yolov8_ver3" src="https://huggingface.co/linhcuem/checker_TB_yolov8_ver3/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['bom_gen', 'bom_jn', 'bom_knp', 'bom_sachet', 'bom_vtgk', 'bom_ytv', 'hop_dln', 'hop_jn', 'hop_vtg', 'hop_ytv', 'lo_kids', 'lo_ytv', 'loc_dln', 'loc_jn', 'loc_kids', 'loc_ytv', 'pocky', 'tui_gen', 'tui_jn', 'tui_sachet', 'tui_vtgk']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('linhcuem/checker_TB_yolov8_ver3')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
|
Marcus2112/LunarLander-v2 | Marcus2112 | 2024-01-29T11:55:18Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-29T11:54:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.72 +/- 14.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the Files tab for the actual name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename below is assumed; adjust it to the file actually stored in this repo
checkpoint = load_from_hub("Marcus2112/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
HarikaR/swedish | HarikaR | 2024-01-29T11:50:49Z | 173 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"megatron-bert",
"text-classification",
"generated_from_trainer",
"base_model:KBLab/megatron-bert-large-swedish-cased-165-zero-shot",
"base_model:finetune:KBLab/megatron-bert-large-swedish-cased-165-zero-shot",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-29T11:46:23Z | ---
base_model: KBLab/megatron-bert-large-swedish-cased-165-zero-shot
tags:
- generated_from_trainer
model-index:
- name: swedish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swedish
This model is a fine-tuned version of [KBLab/megatron-bert-large-swedish-cased-165-zero-shot](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-165-zero-shot) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Trelis/mamba-2.8b-slimpj-bf16 | Trelis | 2024-01-29T11:49:23Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"mamba",
"bf16",
"16bit",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"arxiv:2312.00752",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T11:39:02Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: true
tags:
- mamba
- bf16
- 16bit
datasets:
- cerebras/SlimPajama-627B
---
# Mamba 2.8b SlimPajama - bf16 (16-bit)
This is a 16 bit version of [Mamba-2.8b-slimpj](https://huggingface.co/state-spaces/mamba-2.8b-slimpj/)
Mamba-2.8b-slimpj is a model using the [Mamba](https://arxiv.org/abs/2312.00752) architecture, with 2.8B parameters, trained for 600B tokens on the SlimPajama dataset.
Model code: https://github.com/state-spaces/mamba/tree/main
To load the model, follow the installation instruction in the code repo, and then:
```
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
model = MambaLMHeadModel.from_pretrained("state-spaces/mamba-2.8b-slimpj")
```
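Generation is not covered by the snippet above; a minimal sketch follows. It assumes (as with the upstream `state-spaces` checkpoints) that the GPT-NeoX tokenizer is used and that `MambaLMHeadModel.generate` accepts the arguments shown; both come from the upstream repo's examples rather than from this card:

```python
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

device = "cuda"
# Assumption: the NeoX tokenizer, as used by the upstream state-spaces Mamba checkpoints
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = MambaLMHeadModel.from_pretrained("Trelis/mamba-2.8b-slimpj-bf16", device=device, dtype=torch.bfloat16)

input_ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids.to(device)
out = model.generate(input_ids=input_ids, max_length=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```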
## Inference Notebook (Colab)
- [Notebook here](https://colab.research.google.com/drive/1GsDbbkDTDpia_Dc8s-7bwEn_GrpkBVO4?usp=sharing) |
swapnasa/intent_data | swapnasa | 2024-01-29T11:45:35Z | 194 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-29T11:45:18Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: intent_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent_data
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
adalib/torchdata-oss-codegen-350M-mono-prefix | adalib | 2024-01-29T11:42:49Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Salesforce/codegen-350M-mono",
"base_model:adapter:Salesforce/codegen-350M-mono",
"region:us"
] | null | 2024-01-29T11:15:40Z | ---
library_name: peft
base_model: Salesforce/codegen-350M-mono
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
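In lieu of the missing snippet, here is a minimal sketch. It assumes this repo holds a PEFT adapter for `Salesforce/codegen-350M-mono` (the base model stated in the metadata) and that the task is code completion; the prompt is only illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "Salesforce/codegen-350M-mono"
adapter_id = "adalib/torchdata-oss-codegen-350M-mono-prefix"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights

prompt = "def load_dataset(path):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```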
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
e22vvb/ALL_mt5-base_5_spider_15_wikiSQL | e22vvb | 2024-01-29T11:33:34Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-29T03:32:16Z | ---
tags:
- generated_from_trainer
model-index:
- name: ALL_mt5-base_5_spider_15_wikiSQL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ALL_mt5-base_5_spider_15_wikiSQL
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0535
- Rouge2 Precision: 0.7644
- Rouge2 Recall: 0.5113
- Rouge2 Fmeasure: 0.5798
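For illustration, a minimal inference sketch. It assumes (from the model name) that the task is translating a natural-language question, possibly with schema context, into SQL, and that the tokenizer is stored in this repo; the card itself does not document the expected input format:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "e22vvb/ALL_mt5-base_5_spider_15_wikiSQL"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

question = "How many singers are older than 30?"  # illustrative question only
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```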
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.4225 | 1.0 | 875 | 0.1196 | 0.6114 | 0.3983 | 0.4541 |
| 0.1491 | 2.0 | 1750 | 0.0822 | 0.6989 | 0.4688 | 0.5304 |
| 0.1128 | 3.0 | 2625 | 0.0640 | 0.7302 | 0.4905 | 0.5547 |
| 0.0972 | 4.0 | 3500 | 0.0560 | 0.7542 | 0.5066 | 0.5733 |
| 0.0914 | 5.0 | 4375 | 0.0535 | 0.7644 | 0.5113 | 0.5798 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.7.dev0
- Tokenizers 0.13.3
|
lordjia/lelo-lego-lora-for-xl-sd1-5 | lordjia | 2024-01-29T11:33:03Z | 36 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"style",
"lego",
"toy",
"brickheadz",
"minifigures",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | 2024-01-29T11:33:00Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- style
- lego
- toy
- brickheadz
- minifigures
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: LEGO MiniFig
widget:
- text: ' '
output:
url: >-
6016204.jpeg
- text: '
LEGO MiniFig, A man in a vintage early-20th-century setting, possibly from a period film, stands prominently in the foreground. He wears a dark grey suit, a light blue shirt, a darker blue tie, and a classic fedora hat, conveying an aura of authority and composure. His face shows determination, with sharp features, short dark hair, and an intense gaze. The badge labeled ''K-6'' on his lapel suggests he might have an official or investigative role. The background is an open, dusty street typical of an old western town at dusk, with wooden buildings lining the side and a few people in period attire visible in the distance. The lighting is soft and golden, indicative of a setting sun, giving a cinematic quality to the scene.'
output:
url: >-
6015806.jpeg
- text: '
LEGO Creator, The Space Needle in seattle city view'
output:
url: >-
6015809.jpeg
- text: '
LEGO MiniFig.
A woman with platinum blonde hair exudes Barbie doll charm in a pink cowgirl outfit with a matching hat, her joyful stance enhancing the doll-like atmosphere around her'
output:
url: >-
6015819.jpeg
- text: '
LEGO BrickHeadz, a red chinese dragon in a lunar new year'
output:
url: >-
6015841.jpeg
- text: ' LEGO Creator,
Chinese traditional lion dance, on the street in the early morning'
output:
url: >-
6015843.jpeg
- text: '
LEGO MiniFig.
A man with curly hair and a whimsical expression wears a velvet top hat and red coat amidst a colorful, dreamlike backdrop'
output:
url: >-
6015846.jpeg
- text: ' LEGO BrickHeadz,
In a field of golden wheat, a large doll with black short hair wears a yellow short-sleeve shirt topped with an orange suspender dress. White knee-high socks and black shoes adorn its feet, under a bare tree.'
output:
url: >-
6015849.jpeg
- text: '
LEGO MiniFig.
A man with platinum blonde hair and an athletic build wears a luxurious white fur coat over bare chest, accessorized with a headband and necklace, in a vibrant, stylized setting.'
output:
url: >-
6015860.jpeg
- text: ' LEGO BrickHeadz,
In a field of golden wheat, a large doll with black short hair wears a yellow short-sleeve shirt topped with an orange suspender dress. White knee-high socks and black shoes adorn its feet, under a bare tree.'
output:
url: >-
6016479.jpeg
---
# LeLo - LEGO LoRA for XL & SD1.5
<Gallery />
## Model description
<p><strong>LeLo</strong> stands for <strong>LEGO LoRA</strong>. It is a LoRA trained with over 900 images from the LEGO MiniFigures, BrickHeadz, and Creator themes. It provides a simulation of the LEGO design style.</p><p></p><p><strong><em>Update:</em></strong></p><p>The <strong><u>V2.1_SDXL1.0</u></strong> version expands the training image dataset to 893 high-resolution images, further enhancing the model's drawing quality. Starting from this version, I will no longer provide updates for the corresponding SD1.5 version.</p><p>This version is released to celebrate the Lunar Year of the Dragon. Wishing everyone a happy holiday and prosperous fortune!</p><p></p><p><strong><em>Usage:</em></strong></p><p><span style="color:rgb(253, 126, 20)">Trigger words:</span></p><ul><li><p><strong><u>LEGO MiniFig, {prompt}</u></strong>: MiniFigures theme, suitable for human figures and anthropomorphic animal images.</p></li><li><p><strong><u>LEGO BrickHeadz, {prompt}</u></strong>: BrickHeadz theme, suitable for human figures and anthropomorphic animal images.</p></li><li><p><strong><u>LEGO Creator, {prompt}</u></strong>: Creator theme, widely applicable to objects, animals, plants, buildings, etc.</p></li></ul><p><span style="color:rgb(253, 126, 20)">LoRA Weight: </span>Between 0.6-1.0, recommended to use <strong><u>0.8</u></strong>.</p><p><span style="color:rgb(253, 126, 20)">Denoising: </span>Recommended to use <strong><u>0.3</u></strong>.</p><p><span style="color:rgb(253, 126, 20)">Base model: </span>The <strong><u>V2.1_SDXL1.0</u></strong> version should be paired with the SDXL 1.0 series base models. For the SD 1.5 versions, it is recommended to use it in conjunction with the <strong><u>Realistic Vision</u></strong> series base models, but you can also try pairing it with other base models to explore different styles.</p><p></p><p>If you are interested in my other works, please visit my personal website at <a target="_blank" rel="ugc" href="https://ai.chao.cool">https://ai.chao.cool</a>. There, I have listed my main achievements in Stable Diffusion LoRA and GPTs of ChatGPT.</p><hr /><p><strong>LeLo</strong> 为 <strong>LEGO LoRA</strong> 简称。此 LoRA 模型使用了 900 多张 LEGO MiniFigures 系列,BrickHeadz 系列,和 Creator 系列产品图片训练而成,提供对乐高设计风格的模拟。</p><p></p><p><strong><em>更新说明:</em></strong></p><p><strong><u>V2.1_SDXL1.0</u></strong> 版扩充训练图像数据集至893张高分辨图片,进一步提升模型绘图质量。从这个版本开始,我将不再提供对应的 SD1.5 版本更新。</p><p>此版本为庆祝农历龙年新年而发布,祝大家节日快乐,恭喜发财!</p><p></p><p><strong><em>使用方法:</em></strong></p><p><span style="color:rgb(253, 126, 20)">触发词(Trigger words):</span></p><ul><li><p><strong><u>LEGO MiniFig, {prompt}</u></strong>:人仔风格,适用于人物形象和拟人的动物形象。</p></li><li><p><strong><u>LEGO BrickHeadz, {prompt}</u></strong>:方头仔风格,适用于人物形象和拟人的动物形象。</p></li><li><p><strong><u>LEGO Creator, {prompt}</u></strong>: 乐高创意系列风格,广泛适用于物品、动物、植物、建筑等。</p></li></ul><p><span style="color:rgb(253, 126, 20)">LoRA 权重(weight):</span>0.6-1.0 之间,推荐使用 <strong><u>0.8</u></strong></p><p><span style="color:rgb(253, 126, 20)">放大重绘幅度(Denoising):</span>推荐使用 <strong><u>0.3</u></strong></p><p><span style="color:rgb(253, 126, 20)">基础模型:</span><strong><u>V2.1_SDXL1.0</u></strong> 版请配合 SDXL 1.0 系列基础模型。SD 1.5 版本推荐与 <strong><u>Realistic Vision</u></strong> 系列基础模型配合使用,也可尝试与其他基础模型配合使用以探索不同的风格。</p><p></p><p>如果你的我其他作品感兴趣,请移步我的个人网站(<a target="_blank" rel="ugc" href="https://ai.chao.cool/zh-cn/">https://ai.chao.cool/zh-cn/</a>)。在那里列出了我在 Stable Diffusion LoRA 以及 ChatGPT 定制化 GPTs 方面的主要工作成果。</p>
## Trigger words
You should use `LEGO MiniFig`, `LEGO Creator`, `LEGO BrickHeadz` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/lordjia/lelo-lego-lora-for-xl-sd1-5/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('lordjia/lelo-lego-lora-for-xl-sd1-5', weight_name='Lego_XL_v2.1.safetensors')
image = pipeline(' LEGO BrickHeadz, In a field of golden wheat, a large doll with black short hair wears a yellow short-sleeve shirt topped with an orange suspender dress. White knee-high socks and black shoes adorn its feet, under a bare tree.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lordjia/feng-zi-kai-man-hua-by-feng-zikai | lordjia | 2024-01-29T11:30:50Z | 11 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"style",
"ink painting",
"comic",
"manhua",
"feng zikai",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | 2024-01-29T11:30:46Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- style
- ink painting
- comic
- manhua
- feng zikai
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: by Feng Zikai
widget:
- text: 'Batman at night, by Feng Zikai '
output:
url: >-
3495572.jpeg
- text: 'iron man flying over the city, by Feng Zikai '
output:
url: >-
3495565.jpeg
- text: 'spider man jumping among buildings, by Feng Zikai '
output:
url: >-
3495566.jpeg
- text: 'wonder woman fighting against super man, by Feng Zikai '
output:
url: >-
3495571.jpeg
- text: 'Arcee in Transformers, by Feng Zikai '
output:
url: >-
3495811.jpeg
- text: 'Hot Rod in Transformers, by Feng Zikai '
output:
url: >-
3495810.jpeg
- text: 'Bumblebee, Transformers, by Feng Zikai '
output:
url: >-
3495813.jpeg
- text: 'Optimus Prime, Transformers, by Feng Zikai '
output:
url: >-
3495812.jpeg
- text: 'city view of hongkong, bay view, at night, by Feng Zikai '
output:
url: >-
3495822.jpeg
- text: 'city view of london, river view, by Feng Zikai '
output:
url: >-
3495823.jpeg
---
# 丰子恺漫画 - By FENG Zikai
<Gallery />
([CivitAI](https://civitai.com/models/194494))
## Model description
<p>丰子恺(<a target="_blank" rel="ugc" href="https://zh.wikipedia.org/zh-hans/%E8%B1%90%E5%AD%90%E6%84%B7">维基百科</a>),中国散文家、画家、文学家、美术家与音乐教育家。师从弘一法师李叔同,以中西融合画法创作漫画及散文而著名,是中国漫画艺术的先驱。他的漫画造形简约,画风朴实,饶富童趣,在众多画家中,独树一格。此 LoRA 意在模仿其标志性水墨漫画风格,希望给喜欢丰子恺作品的朋友带来快乐。</p><p>基于 <strong><span style="color:rgb(253, 126, 20)">SDXL 1.0</span></strong> checkpoint。使用时,请加上触发词:<strong><span style="color:rgb(253, 126, 20)">by Feng Zikai</span></strong>,推荐权重(weight)<strong><span style="color:rgb(253, 126, 20)">0.8-1.0</span></strong></p><p>Prompt 示例:</p><pre><code>Batman at night, by Feng Zikai <lora:fengzikai_v1.0_XL:0.8></code></pre><hr /><p>Feng Zikai (<a target="_blank" rel="ugc" href="https://en.wikipedia.org/wiki/Feng_Zikai">Wikipedia</a>), Chinese essayist, painter, writer, artist and music educator. He studied under Master Hongyi Li Shutong and is famous for his comics and prose creations that combine Chinese and Western painting techniques. He is a pioneer of Chinese comics art. His comics have simple shapes, simple style, and full of childishness, making him unique among many painters. This LoRA is intended to imitate his iconic ink comic style, hoping to bring happiness to friends who like Feng Zikai’s works.</p><p>Based on <strong><span style="color:rgb(253, 126, 20)">SDXL 1.0</span></strong> checkpoint. When using, please add trigger words: <strong><span style="color:rgb(253, 126, 20)">by Feng Zikai</span></strong><span style="color:rgb(209, 213, 219)">, with a recommended weight of </span><strong><span style="color:rgb(253, 126, 20)">0.8~1.0</span></strong><span style="color:rgb(209, 213, 219)">.</span></p><p>Prompt sample:</p><pre><code>Batman at night, by Feng Zikai <lora:fengzikai_v1.0_XL:0.8></code></pre>
## Trigger words
You should use `by Feng Zikai` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/lordjia/feng-zi-kai-man-hua-by-feng-zikai/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('lordjia/feng-zi-kai-man-hua-by-feng-zikai', weight_name='fengzikai_v1.0_XL.safetensors')
image = pipeline('city view of london, river view, by Feng Zikai ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lordjia/stereoscopic-portrait-3d-li-ti-xiao-xiang | lordjia | 2024-01-29T11:30:34Z | 6 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"concept",
"portrait",
"parallelview",
"stereoscopic",
"crossview",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:other",
"region:us"
] | text-to-image | 2024-01-29T11:30:33Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- concept
- portrait
- parallelview
- stereoscopic
- crossview
base_model: runwayml/stable-diffusion-v1-5
instance_prompt:
widget:
- text: 'monkey king,
masterpiece, high detail, 8k, high detailed skin, 8k uhd, high quality
'
output:
url: >-
2690164.jpeg
- text: 'a young lady holding a cat, full body
masterpiece, high detail, 8k, high detailed skin, 8k uhd, high quality
'
output:
url: >-
2690163.jpeg
- text: 'a young woman with glasses in campus, half body,
masterpiece, high detail, 8k, high detailed skin, 8k uhd, high quality
'
output:
url: >-
2690160.jpeg
- text: 'a young man in a park,
masterpiece, high detail, 8k, high detailed skin, 8k uhd, high quality
'
output:
url: >-
2690167.jpeg
- text: 'a princess in front of a castle, half body
masterpiece, high detail, 8k, high detailed skin, 8k uhd, high quality
'
output:
url: >-
2690166.jpeg
---
# Stereoscopic Portrait - 3D 立体肖像
<Gallery />
([CivitAI](https://civitai.com/models/152741))
## Model description
<p>This is an experimental project designed to produce <span style="color:rgb(253, 126, 20)">Stereoscopic Portraits</span>. It supports both <span style="color:rgb(253, 126, 20)">Parallel-view</span> and <span style="color:rgb(253, 126, 20)">Cross-view</span> modes, corresponding to two downloadable versions.</p><p><strong>Performance:</strong></p><p>When used as recommended, the subject of the portrait achieves a relatively high success rate. The background, in contrast, might require multiple attempts and might achieve a lower success rate.</p><p><strong>User Guide:</strong> For optimal results and image quality, strictly follow the procedure outlined below:</p><ul><li><p><strong>Checkpoint:</strong> Realistic Vision V5.1</p></li><li><p><strong>Textual Inversion:</strong> ng_deepnegative_v1_75t</p></li><li><p><strong>Positive prompts:</strong> {prompt}, masterpiece, high detail, 8k, high detailed skin, 8k uhd, high quality</p></li><li><p><strong>Negative prompts:</strong> ugly, disfigured, deformed, worst quality, low quality, ng_deepnegative_v1_75t</p></li><li><p><strong>LoRA Weight:</strong> 0.8-1.0</p></li><li><p><strong>Image Dimensions:</strong> Width: 832, Height: 560</p></li><li><p><strong>Optional:</strong> Use Hires. fix to enhance image quality with a Denoising strength of 0.3.</p></li></ul><p><strong>Note:</strong></p><ul><li><p>Using the Realistic Vision V5.1 in tandem with ng_deepnegative_v1_75t can significantly improve the success rate and generated image quality. See the "Suggested Resources" section below for more details.</p></li><li><p>This LoRA is specifically designed for generating Stereoscopic Portraits with a human subject. Without a human subject, a 3D effect cannot be achieved.</p></li><li><p>For detailed information on Stereoscopy, refer to the Wikipedia entry titled "<a target="_blank" rel="ugc" href="https://en.wikipedia.org/wiki/Stereoscopy">Stereoscopy</a>".</p></li></ul><hr /><p>这是一个实验性项目,用来生成 <span style="color:rgb(253, 126, 20)">立体视觉肖像(Stereoscopic Portrait)</span>,支持 <span style="color:rgb(253, 126, 20)">Parallel-view</span> 与 <span style="color:rgb(253, 126, 20)">Cross-view</span> 两种模式(对应两个下载版本)。</p><p>在按照推荐方法使用的前提下,人物主体可以获得比较高的成功率,背景的成功率相较人物主体要低一些,有时需要多次尝试。</p><p><strong>使用说明:</strong>(严格按照以下流程生成图片,可以获得比较高的成功率与图像质量。)</p><ul><li><p><strong>模型(checkpoint):</strong>Realistic Vision V5.1</p></li><li><p><strong>嵌入式(Textual Inversion):</strong>ng_deepnegative_v1_75t</p></li><li><p><strong>正面提示词:</strong>{prompt}, masterpiece, high detail, 8k, high detailed skin, 8k uhd, high quality</p></li><li><p><strong>负面提示词:</strong>ugly, disfigured, deformed, worst quality, low quality, ng_deepnegative_v1_75t</p></li><li><p><strong>LoRA 权重:</strong>0.8-1.0</p></li><li><p><strong>图像尺寸:</strong>宽度:832,高度:560</p></li><li><p><strong>可选:</strong>使用高清修复(Hires. fix)提升图像质量,重绘幅度(Denoising strength)0.3</p></li></ul><p><strong><em>注意:</em></strong></p><ul><li><p>配合使用 Realistic Vision V5.1 与 ng_deepnegative_v1_75t 可以极大提高成功率与生成图像质量。具体见下方“Suggested Resources”。</p></li><li><p>此 LoRA 只用于生成带有人物主体的立体视觉肖像图,如果不带有人物主体则无法产生立体效果。</p></li><li><p>关于立体视觉(Stereoscopy)的具体说明,参见维基百科“<a target="_blank" rel="ugc" href="https://zh.wikipedia.org/zh-cn/%E7%AB%8B%E4%BD%93%E5%9B%BE">立体图</a>”词条。</p></li></ul>
## Download model
Weights for this model are available in Safetensors format.
[Download](/lordjia/stereoscopic-portrait-3d-li-ti-xiao-xiang/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('lordjia/stereoscopic-portrait-3d-li-ti-xiao-xiang', weight_name='stereoscopic_parallel_v1.0.safetensors')
image = pipeline('a princess in front of a castle, half body, masterpiece, high detail, 8k, high detailed skin, 8k uhd, high quality').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
tanatapanun/fine-tuned-BioBARTv2-20-epochs-1024-input-160-output | tanatapanun | 2024-01-29T11:28:59Z | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-29T10:56:10Z | ---
base_model: checkpoint_global_step_200000
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBARTv2-20-epochs-1024-input-160-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBARTv2-20-epochs-1024-input-160-output
This model is a fine-tuned version of [checkpoint_global_step_200000](https://huggingface.co/checkpoint_global_step_200000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4289
- Rouge1: 0.1951
- Rouge2: 0.0471
- Rougel: 0.1473
- Rougelsum: 0.1452
- Gen Len: 39.25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 6.6467 | 0.0363 | 0.0002 | 0.0297 | 0.0295 | 19.55 |
| No log | 2.0 | 302 | 1.6612 | 0.1267 | 0.0317 | 0.11 | 0.111 | 24.51 |
| No log | 3.0 | 453 | 1.4393 | 0.0448 | 0.0077 | 0.0372 | 0.0349 | 15.64 |
| 4.5151 | 4.0 | 604 | 1.3648 | 0.1309 | 0.028 | 0.1019 | 0.101 | 30.13 |
| 4.5151 | 5.0 | 755 | 1.3211 | 0.1644 | 0.0365 | 0.1286 | 0.1273 | 56.4 |
| 4.5151 | 6.0 | 906 | 1.3159 | 0.1229 | 0.0234 | 0.0986 | 0.0981 | 29.91 |
| 1.0545 | 7.0 | 1057 | 1.2964 | 0.1601 | 0.0317 | 0.1291 | 0.1283 | 31.4 |
| 1.0545 | 8.0 | 1208 | 1.2933 | 0.1677 | 0.0355 | 0.1259 | 0.1245 | 39.53 |
| 1.0545 | 9.0 | 1359 | 1.3003 | 0.1808 | 0.0416 | 0.1356 | 0.1344 | 39.72 |
| 0.7195 | 10.0 | 1510 | 1.3179 | 0.1751 | 0.0414 | 0.1334 | 0.1327 | 42.87 |
| 0.7195 | 11.0 | 1661 | 1.3323 | 0.1799 | 0.0415 | 0.1385 | 0.1386 | 38.91 |
| 0.7195 | 12.0 | 1812 | 1.3588 | 0.1851 | 0.031 | 0.1467 | 0.1455 | 33.25 |
| 0.7195 | 13.0 | 1963 | 1.3538 | 0.1852 | 0.0435 | 0.1494 | 0.1462 | 36.31 |
| 0.4924 | 14.0 | 2114 | 1.3737 | 0.1779 | 0.0397 | 0.1361 | 0.1361 | 38.17 |
| 0.4924 | 15.0 | 2265 | 1.3735 | 0.1993 | 0.0423 | 0.1441 | 0.1411 | 45.84 |
| 0.4924 | 16.0 | 2416 | 1.3962 | 0.1778 | 0.0403 | 0.1399 | 0.1378 | 33.99 |
| 0.3509 | 17.0 | 2567 | 1.4080 | 0.1667 | 0.0414 | 0.1215 | 0.1188 | 41.35 |
| 0.3509 | 18.0 | 2718 | 1.4148 | 0.1893 | 0.0437 | 0.1437 | 0.1423 | 39.25 |
| 0.3509 | 19.0 | 2869 | 1.4232 | 0.1904 | 0.0417 | 0.1428 | 0.141 | 41.09 |
| 0.2786 | 20.0 | 3020 | 1.4289 | 0.1951 | 0.0471 | 0.1473 | 0.1452 | 39.25 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ImSaito/llama-2-chistes | ImSaito | 2024-01-29T11:21:52Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-01-29T11:03:00Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v0.73 | jungyuko | 2024-01-29T11:19:26Z | 140 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T11:06:35Z | ---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v0.73
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
octnn/dqn-SpaceInvadersNoFrameskip-v4 | octnn | 2024-01-29T11:18:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-29T11:18:15Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 564.00 +/- 177.75
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga octnn -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga octnn -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga octnn
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mailvita/Mailvita-EML-to-MBOX-Converter-for-Mac | mailvita | 2024-01-29T11:18:23Z | 0 | 0 | null | [
"region:us"
] | null | 2024-01-29T11:17:37Z | Mailvita EML to MBOX Converter for Mac Software is a 100% efficient solution to convert EML files to MBOX file formats. This software supports all email clients of MBOX files, such as Thunderbird, Apple Mail, Eudora, Seamonkey, Poco Mail, Entourage, Power Mail, etc. You can easily convert single and multiple Windows Live Mail.eml files to PST file format without MS Outlook. It is easy to use and can be used by types of users, like technical and non-technical. The user can easily save the data as desired. Export Windows Live Mail EML files to the Thunderbird Tool to support all macOS and Windows OS versions. It also supports all MS Outlook versions, such as 2021, 2019, 2016, 2013, 2010, etc. Download the free trial version of the utility.
Visit here: https://www.mailvita.com/eml-to-mbox-converter-for-mac/ |
MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF | MaziyarPanahi | 2024-01-29T11:17:32Z | 40 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"NurtureAI/openchat_3.5-16k",
"pytorch",
"arxiv:2309.11235",
"arxiv:2303.08774",
"arxiv:2212.10560",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1",
"conversational"
] | text-generation | 2024-01-29T11:06:39Z | ---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- NurtureAI/openchat_3.5-16k
- pytorch
- arxiv:2309.11235
- arxiv:2303.08774
- arxiv:2212.10560
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>

## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="chatml")  # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
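As a minimal sketch (assuming `langchain-community` is installed and the GGUF file has been downloaded as shown earlier; parameters mirror the llama.cpp example above), loading this model through LangChain's `LlamaCpp` wrapper could look like this:
```python
from langchain_community.llms import LlamaCpp

# Assumes the GGUF file was downloaded with huggingface-cli as shown earlier.
llm = LlamaCpp(
    model_path="./openchat_3.5-16k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # context length, as in the llama.cpp example above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("Explain in one paragraph what a GGUF file is."))
```
See the guides linked above for prompt templating and chain composition. |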
wahaha1987/LunarLander-v2-gymnasium | wahaha1987 | 2024-01-29T11:16:02Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-29T11:15:55Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -169.27 +/- 111.84
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo_gymnasium'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'wahaha1987/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
suridol/NeuralPipe-7B-ties | suridol | 2024-01-29T11:15:07Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"samir-fama/SamirGPT-v1",
"abacusai/Slerp-CM-mist-dpo",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:merge:EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:abacusai/Slerp-CM-mist-dpo",
"base_model:merge:abacusai/Slerp-CM-mist-dpo",
"base_model:samir-fama/SamirGPT-v1",
"base_model:merge:samir-fama/SamirGPT-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T11:07:18Z | ---
tags:
- merge
- mergekit
- lazymergekit
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
base_model:
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
---
# NeuralPipe-7B-ties
NeuralPipe-7B-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: samir-fama/SamirGPT-v1
parameters:
density: 0.53
weight: 0.4
- model: abacusai/Slerp-CM-mist-dpo
parameters:
density: 0.53
weight: 0.3
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "suridol/NeuralPipe-7B-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ZiHDeng/peft-lora-starcoder1B-Instruction-ny8-FIM | ZiHDeng | 2024-01-29T11:10:25Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2024-01-29T08:55:03Z | ---
license: bigcode-openrail-m
library_name: peft
tags:
- generated_from_trainer
base_model: bigcode/starcoderbase-1b
model-index:
- name: peft-lora-starcoder1B-Instruction-ny8-FIM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-starcoder1B-Instruction-ny8-FIM
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough code sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
- mixed_precision_training: Native AMP
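As a rough, hypothetical sketch, these hyperparameters map onto a `transformers`/`peft` setup along the following lines; the dataset pipeline, the FIM preprocessing and the LoRA rank/alpha/target modules are assumptions, not taken from this card:
```python
from transformers import TrainingArguments
from peft import LoraConfig

# LoRA settings below are placeholders; the card does not document them.
lora_config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.05,
    target_modules=["c_proj", "c_attn", "q_attn"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="peft-lora-starcoder1B-Instruction-ny8-FIM",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # effective batch size 16
    lr_scheduler_type="cosine",
    warmup_steps=30,
    max_steps=2000,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=100,                  # matches the 100-step evaluation cadence in the table below
    logging_steps=100,
)
```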
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4136 | 0.05 | 100 | 0.4006 |
| 0.3674 | 0.1 | 200 | 0.3744 |
| 0.3428 | 0.15 | 300 | 0.3908 |
| 0.2882 | 0.2 | 400 | 0.4563 |
| 0.2344 | 0.25 | 500 | 0.5462 |
| 0.2087 | 0.3 | 600 | 0.5874 |
| 0.1942 | 0.35 | 700 | 0.6157 |
| 0.1865 | 0.4 | 800 | 0.6388 |
| 0.1813 | 0.45 | 900 | 0.6572 |
| 0.1783 | 0.5 | 1000 | 0.6639 |
| 0.1711 | 0.55 | 1100 | 0.6755 |
| 0.166 | 0.6 | 1200 | 0.6996 |
| 0.1613 | 0.65 | 1300 | 0.7046 |
| 0.1597 | 0.7 | 1400 | 0.7062 |
| 0.1545 | 0.75 | 1500 | 0.7185 |
| 0.1532 | 0.8 | 1600 | 0.7227 |
| 0.1499 | 0.85 | 1700 | 0.7315 |
| 0.151 | 0.9 | 1800 | 0.7326 |
| 0.1494 | 0.95 | 1900 | 0.7333 |
| 0.1506 | 1.0 | 2000 | 0.7334 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
USER99999/Model1 | USER99999 | 2024-01-29T11:09:25Z | 0 | 0 | null | [
"en",
"nl",
"dataset:fka/awesome-chatgpt-prompts",
"license:wtfpl",
"region:us"
] | null | 2024-01-29T11:07:06Z | ---
license: wtfpl
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
- nl
metrics:
- accuracy
--- |
jc9080/koalpaca-12.8b-naverwebtoon | jc9080 | 2024-01-29T10:59:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T10:59:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atom-mu-control/ppo-SnowballTarget | atom-mu-control | 2024-01-29T10:59:44Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-01-29T10:59:37Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: atom-mu-control/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Jjzzzz/bart-base-article | Jjzzzz | 2024-01-29T10:53:31Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-29T10:14:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/bart-base
model-index:
- name: bart-base-article
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-article
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Liveme/Turdus_qlora_20240126-1830_8bit_merge | Liveme | 2024-01-29T10:51:40Z | 0 | 0 | peft | [
"peft",
"pytorch",
"mistral",
"base_model:udkai/Turdus",
"base_model:adapter:udkai/Turdus",
"region:us"
] | null | 2024-01-29T10:23:42Z | ---
library_name: peft
base_model: udkai/Turdus
---
# Model Card for Model ID
A few inference parameters to pay attention to:
```
params = {
'temperature': 0.85,
'top_p': 0.95,
'top_k': 20,
'repetition_penalty': 1.18,
'max_tokens': 120,
'stop': ["\n", "\u200d"],
'typical_p': 0.95,
'n': 1,
}
```
Prompt template format
```
### Instruction:
<prompt> (without the <>)
### Response:
```
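A minimal sketch of assembling this template in Python (the helper and the example instruction are hypothetical; pass the resulting string, together with the sampling parameters above, to your inference backend):
```python
# Hypothetical helper that assembles the Alpaca-style template shown above.
def build_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n### Response:\n"

prompt = build_prompt("Hi, how was your shift today?")
# Generate with the parameters listed above, e.g. temperature=0.85, top_p=0.95,
# top_k=20, repetition_penalty=1.18, max_tokens=120, stop=["\n", "\u200d"].
```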
Training parameters (trained with Llama-Factory):
```
- learning_rate: 5e-05
- lr_scheduler_type: cosine
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- gradient_accumulation_steps: 4
- warmup_steps: 24
- num_train_epochs: 2
- template: alpaca
- cutoff_len: 2048
- finetuning_type: lora
- lora_target: q_proj,v_proj,o_proj,k_proj
- quantization_bit: 8
- lora_rank: 64
- lora_alpha: 16
- bf16: True
- logging_steps: 20
- val_size: 4
- save_steps: 200
```
The training prompt format is as follows:
```python
system_prompt = '''You are now roleplaying as {Character Nickname}.
{character_msg}
Identity: Nurse
Categories: Erotica, Please use an {tone} tone
Personalities: {personalities}
```
|
Tinny-Robot/acne | Tinny-Robot | 2024-01-29T10:47:15Z | 8 | 2 | transformers | [
"transformers",
"yolov8",
"endpoints_compatible",
"region:us"
] | null | 2023-04-23T13:55:36Z | # Acne Detection Model

## Overview
This repository contains a YOLOv8-based model trained for detecting acne on African and dark skin tones. The model is designed to be inclusive, focusing on diverse datasets to improve dermatological applications' accuracy across various skin types.
## Model Details
- **Model Type:** YOLOv8
- **Architecture:** YOLOv8
- **Version:** 1.0
## Usage
### Installation
```bash
pip install -r requirements.txt
```
### Model Loading
```python
from ultralytics import YOLO
# Load the model
model = YOLO("acne.pt")
# Perform inference on a test image
results = model.predict(source="path/to/test_image.jpg")

# Inspect the detections (bounding boxes, confidences, class ids)
for result in results:
    print(result.boxes)
```
Replace `"acne.pt"` with the correct model weights file.
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- Data labeling by [Amina Shiga](https://www.linkedin.com/in/amina-shiga-07000522a)
- Model Training by [Nathaniel Handan](https://www.linkedin.com/in/nathanielhandan/)
## Contributing
We welcome contributions from the community. If you find any issues or have suggestions, please open an [issue](https://github.com/Tinny-Robot/acne-detection/issues) or submit a pull request.
## Contact
For inquiries, please contact [Nathaniel Handan](mailto:[email protected]).
## References
- Ultralytics YOLOv8: https://github.com/ultralytics/ultralytics
- VGG Image Annotator (VIA): https://www.robots.ox.ac.uk/~vgg/software/via/ |
dhavalgala/mms-tts-ind-train | dhavalgala | 2024-01-29T10:45:36Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T07:25:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dbmdz/detectron2-v2-model | dbmdz | 2024-01-29T10:41:34Z | 0 | 0 | null | [
"tensorboard",
"license:mit",
"region:us"
] | null | 2023-08-28T09:07:58Z | ---
license: mit
---
# Detectron2 v2 model
This repository hosts version 2 of our trained Detectron2 model (successor to our [previously trained model](https://huggingface.co/dbmdz/detectron2-model)),
that can detect segments from digitized books.
The following classes are supported:
- Illustration
- Stamp
- Initial
- Other
The model is based on `faster_rcnn_R_50_FPN_3x` and is fine-tuned on 8,027 manually annotated images, resulting in 5,363 annotated segments.
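As a minimal inference sketch (the checkpoint filename, score threshold and class index order are assumptions, not documented here):
```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Start from the same base architecture the model was fine-tuned from.
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4          # Illustration, Stamp, Initial, Other
cfg.MODEL.WEIGHTS = "model_final.pth"        # hypothetical filename of the checkpoint in this repo
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # assumed confidence threshold

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("page.jpg"))  # one digitized book page
print(outputs["instances"].pred_classes, outputs["instances"].pred_boxes)
```
Check the metadata shipped with the checkpoint for the actual class index mapping. |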
csabag76/sg | csabag76 | 2024-01-29T10:41:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"phi-msft",
"text-generation",
"conversational",
"custom_code",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:LDJnr/Capybara",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T10:41:14Z | ---
license: mit
license_name: microsoft-research-license
license_link: LICENSE
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
language:
- en
---
Dolphin 2.6 Phi-2 🐬
Eric Hartford and Fernando Fernandes
Join our Discord https://discord.gg/vT3sktQ3zb
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
This model's training was sponsored by [convai](https://www.convai.com/).
This model is based on [Phi-2](https://huggingface.co/microsoft/phi-2) and is governed by MIT licence
trust_remote_code is required.
New in 2.6
- Fixed a training configuration issue that improved the quality a lot
- Due to popular demand, added back samantha-based empathy data
- Replaced synthia and pure-dove with Capybara
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Training
It took 2 days to train 3 epochs on 4x A100s using qLoRA and Axolotl
Prompt format:
This model uses ChatML prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are an AI assistant expert at dolphin training<|im_end|>
<|im_start|>user
Please give ideas and a detailed plan about how to assemble and train an army of dolphin companions to swim me anywhere I want to go and protect me from my enemies and bring me fish to eat.<|im_end|>
<|im_start|>assistant
Assembling, training, and utilizing dolphins as your companions for transportation, protection, and fishing is no small task. However, with careful planning and execution, it can be accomplished. Here's a detailed guide on how to achieve this:
1. **Acquiring Dolphins**: Finding dolphins isn't easy, but you could potentially find them in the wild or at aquariums. For ethical reasons, we suggest acquiring adult dolphins that have been born into captivity. They may not have had as much exposure to humans as their wild counterparts, which makes them easier to handle.
2. **Creating a Safe Environment**: Build a large pool or enclosure where these dolphins can live comfortably while you train them. It should mimic their natural environment as closely as possible. Include elements like rocks (for resting), seaweed (for hiding), and water currents (to simulate ocean conditions).
3. **Training Sessions**: Begin by teaching basic commands such as "come", "stay", "sit", "roll over". Use positive reinforcement methods like treats and praise to encourage good behavior. Be patient; dolphins are intelligent creatures who need time to understand what they're being asked to do.
4. **Transportation Training**: Once your dolphins have mastered basic commands, start integrating travel into the mix. Place floating objects around the pool to represent different destinations. The goal here would be to get your dolphins to recognize these items and associate them with specific locations. Start with short trips and gradually increase the distance.
5. **Protection Skills**: To teach them protective skills, introduce simulated threats into the pool. This might involve using inflatable obstacles or having another trained animal pretend to attack. The dolphins will learn to react appropriately to these situations.
6. **Fishing Skills**: Teaching dolphins to catch fish involves introducing nets or toys into the pool and encouraging them to use their mouths to retrieve the object. Reward them when they successfully catch something.
7. **Socialization**: Dolphins thrive in social environments so ensure they spend plenty of time interacting with each other during their training sessions. This can help reduce stress levels and make training more effective.
8. **Dietary Needs**: Dolphins require a diet high in protein, fats, and minerals. Feed them fish rich in these nutrients several times per day. If necessary, consult with a marine biologist or veterinarian experienced in aquatic diets for advice.
```
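A minimal loading sketch (the generation settings are illustrative, and whether this repository ships a tokenizer chat template is not documented, so the ChatML prompt is built by hand; `trust_remote_code=True` is required as noted above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "csabag76/sg"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", trust_remote_code=True)

# Build the ChatML prompt shown above by hand.
prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about dolphins.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)  # illustrative generation length
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```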
## Gratitude
- This model was made possible by the generous sponsorship of [Convai](https://www.convai.com/).
- Huge thank you to Microsoft for training and publishing the weights of Phi-2
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- HUGE Thank you to the dataset authors: @jondurbin, @ise-uiuc, @teknium, @LDJnr and @migtissera
- And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output

## Future Plans
Dolphin 3.0 dataset is in progress, and will include:
- enhanced general chat use-cases
- enhanced structured output
- enhanced Agent cases like Autogen, Memgpt, Functions
- enhanced role-playing
[If you would like to financially support my efforts](https://ko-fi.com/erichartford)
[swag](https://fa7113.myshopify.com/) |
FelixChao/WestSeverus-10.7B | FelixChao | 2024-01-29T10:40:25Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T10:34:54Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- FelixChao/WestSeverus-7B-DPO-v2
---
# WestSeverus-10.7B
WestSeverus-10.7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: FelixChao/WestSeverus-7B-DPO-v2
layer_range: [0, 24]
- sources:
- model: FelixChao/WestSeverus-7B-DPO-v2
layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "FelixChao/WestSeverus-10.7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
erdometo/xlm-roberta-base-finetuned-TQuad2 | erdometo | 2024-01-29T10:39:57Z | 16 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:IProject-10/xlm-roberta-base-finetuned-squad2",
"base_model:finetune:IProject-10/xlm-roberta-base-finetuned-squad2",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-01-29T07:57:13Z | ---
license: mit
base_model: IProject-10/xlm-roberta-base-finetuned-squad2
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-TQuad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-TQuad2
This model is a fine-tuned version of [IProject-10/xlm-roberta-base-finetuned-squad2](https://huggingface.co/IProject-10/xlm-roberta-base-finetuned-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0578 | 1.0 | 889 | 1.2268 |
| 0.7147 | 2.0 | 1778 | 1.2469 |
| 0.5565 | 3.0 | 2667 | 1.3530 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
nold/phi-2-GGUF | nold | 2024-01-29T10:38:15Z | 0 | 0 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
] | text-generation | 2024-01-29T09:51:45Z | ---
inference: false
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## Intended Uses
Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
```
where the model generates the text after "." .
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
```markdown
Instruct: Write a detailed analogy between mathematics and a lighthouse.
Output: Mathematics is like a lighthouse. Just as a lighthouse guides ships safely to shore, mathematics provides a guiding light in the world of numbers and logic. It helps us navigate through complex problems and find solutions. Just as a lighthouse emits a steady beam of light, mathematics provides a consistent framework for reasoning and problem-solving. It illuminates the path to understanding and helps us make sense of the world around us.
```
where the model generates the text after "Output:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Well, have you tried creating a study schedule and sticking to it?
Alice: Yes, I have, but it doesn't seem to help much.
Bob: Hmm, maybe you should try studying in a quiet environment, like the library.
Alice: ...
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
import math

def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
***
Quantized by [nold](https://huggingface.co/nold), Model by [phi-2](https://huggingface.co/microsoft/phi-2)
|