| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-25 06:27:54) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 495 classes) | tags (sequence, 1 – 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-25 06:24:22) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
RichardErkhov/ZySec-AI_-_ZySec-7B-gguf | RichardErkhov | 2024-05-18T09:07:16Z | 19 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-18T07:47:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ZySec-7B - GGUF
- Model creator: https://huggingface.co/ZySec-AI/
- Original model: https://huggingface.co/ZySec-AI/ZySec-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ZySec-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [ZySec-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [ZySec-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [ZySec-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [ZySec-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [ZySec-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [ZySec-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [ZySec-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [ZySec-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [ZySec-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [ZySec-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [ZySec-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [ZySec-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [ZySec-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [ZySec-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [ZySec-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [ZySec-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [ZySec-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [ZySec-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [ZySec-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [ZySec-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [ZySec-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_ZySec-7B-gguf/blob/main/ZySec-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
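The table above only lists the files. As a hedged illustration that is not part of the original card, the snippet below shows one way to fetch a quant with `huggingface_hub` and run it locally with `llama-cpp-python`; the chosen filename (`ZySec-7B.Q4_K_M.gguf`), context size, and generation settings are assumptions.
```python
# Hedged sketch: download one GGUF quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/ZySec-AI_-_ZySec-7B-gguf",
    filename="ZySec-7B.Q4_K_M.gguf",  # any filename from the table above works
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context size is an assumption
out = llm("Explain the Cyber Kill Chain in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```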
Original model description:
---
library_name: transformers
license: apache-2.0
tags:
- security
- cybersecwithai
- threat
- vulnerability
- infosec
- zysec.ai
- cyber security
- ai4security
- llmsecurity
- cyber
- malware analysis
- exploitdev
- ai4good
- aisecurity
- threat
- cybersec
- cybersecurity
---
# ZySec-7B
ZySec-7B stands as a pivotal innovation for security professionals, leveraging the advanced capabilities of HuggingFace's Zephyr language model series. This AI model is crafted to be an omnipresent cybersecurity ally, offering on-demand expert guidance on cybersecurity issues. Picture ZySec-7B as an ever-present digital teammate, adept at navigating the complexities of security challenges.
The efficacy of ZySec-7B lies in its comprehensive training across numerous cybersecurity fields, providing a deep and wide-ranging understanding of the sector. ZySec is developed using the DPO technique, utilizing a varied dataset encompassing critical topics such as:
- Sophisticated areas like Attack Surface Threats, Cloud Security, and the Cyber Kill Chain.
- Key compliance and regulatory frameworks, including CIS Controls, FedRAMP, PCI DSS, and ISO/IEC 27001.
- Practical aspects like Cloud Secure Migration, Data Exfiltration Techniques, and Security Incident Handling.
- Crucial strategic fields such as Security Governance, Risk Management, and Security Architecture Review.
ZySec-7B's training spans over 30 unique domains, each enriched with thousands of data points, delivering unparalleled expertise.
As the first of its kind in an open-source, AI-driven cybersecurity series, ZySec-7B transcends the conventional role of a support tool, redefining organizational security approaches. Its open-source nature not only invites community contributions but also enhances its flexibility and transparency in managing vast cybersecurity data. ZySec-7B is instrumental in providing vital, actionable insights for strategic decision-making and advanced risk management. More than mere software, ZySec-7B is a community-enhanced strategic tool, equipping your team to proactively confront and stay ahead of the dynamic landscape of cyber threats and regulatory demands.
# For suggestions please use [Road Map](https://zysec-ai.productlift.dev/t/roadmap)
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/ZySec-7B-dataset-composition.png?download=true" alt="Dataset Distribution" width="90%"/>
Details of dataset distribution here - [Dataset Distribution](https://huggingface.co/aihub-app/ZySec-7B/resolve/main/ZySec-7B-dataset-composition.png?download=true)
Fully compatible with [LM Studio](https://lmstudio.ai): search for “ZySec” to find the model. Below is a sample output of ZySec writing an email to John about database security in LM Studio:
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/sample-output.png" alt="Sample Output" width="90%"/>
---
The training is funded by [AttackIO](https://www.attackio.app), the mobile app for Cyber Security professionals.
Official GGUF version is hosted here - [ZySec-7B-v1-GGUF on HuggingFace](https://huggingface.co/aihub-app/ZySec-7B-v1-GGUF)
## [ZySec AI: Unleashing the Potential of the ZySec Series Model](https://github.com/ZySec-AI/ZySec)
Project ZySec, an integral part of ZySec AI, stands at the forefront of integrating Artificial Intelligence into Cybersecurity. Centered around the innovative ZySec 7B model, it's designed to revolutionize the cybersecurity landscape with AI-driven solutions. ZySec AI isn't just a tool, it's a transformative approach, blending AI's cutting-edge capabilities with the unique intricacies of cybersecurity, while ensuring privacy and security.
### Discover the Key Features of Project ZySec
- **AI-Driven Cybersecurity:** Tap into the power of the ZySec 7B model, a bespoke AI solution fine-tuned for cybersecurity.
- **24/7 Expert Assistance:** Benefit from round-the-clock support and expert advice, guaranteeing smooth operations during any SOC shift.
- **Efficient Playbook Access:** Streamline your workflow with quick and easy access to playbooks and documents, enhancing information retrieval.
- **Standards Explorer:** Navigate various standards with ease, akin to a seasoned expert's proficiency.
- **Ongoing Internet Research:** Leverage AI-enabled, thorough internet research for exhaustive insights. (Note: Internet use is optional and specific to this feature).
### About Project ZySec by ZySec AI
ZySec AI is an open-source project with a vision of fusing Cybersecurity with Artificial Intelligence. Our goal is to transform the way security professionals engage with technology. More than a mere tool, ZySec AI symbolizes a comprehensive strategy to augment security operations, merging the innovative essence of AI with cybersecurity's distinctive challenges while always ensuring privacy and security.
https://github.com/ZySec-AI/ZySec
### The ZySec Roadmap
https://github.com/ZySec-AI/.github/blob/main/roadmap.md
|
omarelsayeed/Jobs_Intra_Category_setfit2 | omarelsayeed | 2024-05-18T09:04:55Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-18T09:02:03Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 150 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.LoggingBAS`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 30, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mshamrai/ppo-LunarLander-v2 | mshamrai | 2024-05-18T08:53:54Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T08:53:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.05 +/- 7.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
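Since the section above is still a TODO, here is a minimal hedged sketch of loading the checkpoint with `huggingface_sb3` and `stable-baselines3`; the checkpoint filename is an assumption.
```python
# Hedged sketch: download the PPO checkpoint from the Hub and run one prediction.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="mshamrai/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```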
|
yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_challenge_Spanish_v2 | yzhuang | 2024-05-18T08:52:49Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T09:28:56Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: Meta-Llama-3-8B-Instruct_fictional_arc_challenge_Spanish_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/yufanz/autotree/runs/7283970144.51595-887226ef-9076-4284-993d-3e22f4763aa6)
# Meta-Llama-3-8B-Instruct_fictional_arc_challenge_Spanish_v2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
### Training results
### Framework versions
- Transformers 4.41.0
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.19.1
- Tokenizers 0.19.1
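As an illustrative sketch that is not part of the original card, the fine-tune can be queried through the `transformers` text-generation pipeline; the prompt and generation settings below are assumptions.
```python
# Hedged sketch: chat-style generation with the fine-tuned checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_challenge_Spanish_v2",
    device_map="auto",  # requires accelerate; assumption about available hardware
)
messages = [{"role": "user", "content": "Hola, ¿puedes responder una pregunta de ciencias?"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```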
|
Statuo/LemonKunoichiWizardv3_4bpw | Statuo | 2024-05-18T08:51:39Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-18T08:47:06Z | ---
{}
---
# Lemon Kunoichi Wizard - 7b

[Base Model](https://huggingface.co/Statuo/LemonKunoichiWizardV3/), [4bpw](https://huggingface.co/Statuo/LemonKunoichiWizardv3_4bpw), [6bpw](https://huggingface.co/Statuo/LemonKunoichiWizardv3_6bpw), [8bpw](https://huggingface.co/Statuo/LemonKunoichiWizardv3_8bpw)
The Quanted versions come with the measurement files in case you want to do your own quants.
A merge of three models, LemonadeRP-4.5.3, Kunoichi-DPO-v2, and WizardLM-2. I used Lemonade as a base with Kunoichi being the second biggest influence and WizardLM-2 for logic capabilities.
The end result is a Roleplay-focused model with great character card inference. I ran 4 merges at varying values to see which provided the most accurate output for a character card's quirks, with this v3 version being the winner of the four.
## Context Template - Alpaca
Alpaca preset seems to work well with your own System Prompt.
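For reference, here is a hedged sketch of what an Alpaca-style prompt with a custom system prompt can look like; the exact wording is an assumption rather than something specified by this card.
```python
# Hedged sketch of an Alpaca-style prompt layout with a user-supplied system prompt.
SYSTEM = "You are a helpful roleplay assistant."  # placeholder system prompt
ALPACA_TEMPLATE = (
    "{system}\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)
prompt = ALPACA_TEMPLATE.format(system=SYSTEM, instruction="Introduce your character.")
print(prompt)
```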
## Context Size - 8192
The model loads at 8192 on my end, but theoretically it should be able to go up to 32k. Not that it'll be coherent at 32k. Most models based on Mistral like this end up being - at best - 12k context size for coherent output. I only tested at 8k which is where the base models tend to shine. YMMV otherwise.
---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- dreamgen/WizardLM-2-7B
- KatyTheCutie/LemonadeRP-4.5.3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [dreamgen/WizardLM-2-7B](https://huggingface.co/dreamgen/WizardLM-2-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
weight: 1.0
- model: dreamgen/WizardLM-2-7B
parameters:
weight: 0.2
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
weight: 0.6
merge_method: linear
dtype: float16
``` |
Prajwalll/whisper-small-te | Prajwalll | 2024-05-18T08:45:57Z | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"te",
"dataset:mozilla-foundation/common_voice_17_0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-18T07:58:48Z | ---
language:
- te
base_model: openai/whisper-small-te
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper small Te sample - Prajwal Nagaraj
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: te
split: None
args: 'config: te, split: test'
metrics:
- name: Wer
type: wer
value: 87.36263736263736
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small Te sample - Prajwal Nagaraj
This model is a fine-tuned version of [openai/whisper-small-te](https://huggingface.co/openai/whisper-small-te) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7139
- Wer: 87.3626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0001 | 71.4286 | 500 | 0.7139 | 87.3626 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
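As an illustrative sketch that is not part of the original card, the checkpoint can be used for Telugu transcription via the ASR pipeline; the audio path below is a placeholder.
```python
# Hedged sketch: transcribe a Telugu recording with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Prajwalll/whisper-small-te")
result = asr("sample_telugu.wav")  # placeholder path; 16 kHz mono audio works best
print(result["text"])
```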
|
rajivmehtapy/llama3_text2cypher_recommendations | rajivmehtapy | 2024-05-18T08:44:51Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-18T08:33:02Z | ---
license: apache-2.0
---
|
abdulmalek9/Llama3-8b_model | abdulmalek9 | 2024-05-18T08:39:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T08:39:45Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** abdulmalek9
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KenanKhan/my-multi-view-diffusion | KenanKhan | 2024-05-18T08:39:18Z | 0 | 0 | null | [
"image-to-3d",
"arxiv:2312.02201",
"license:openrail",
"region:us"
] | image-to-3d | 2024-05-18T08:23:02Z | ---
license: openrail
pipeline_tag: image-to-3d
---
This is a copy of [ashawkey/imagedream-ipmv-diffusers](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers).
It is hosted here for persistence throughout the ML for 3D course.
# MVDream-diffusers Model Card
This is a port of https://huggingface.co/Peng-Wang/ImageDream into diffusers.
For usage, please check: https://github.com/ashawkey/mvdream_diffusers
## Citation
```
@article{wang2023imagedream,
title={ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation},
author={Wang, Peng and Shi, Yichun},
journal={arXiv preprint arXiv:2312.02201},
year={2023}
}
```
## Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
|
colesimmons/xlm-roberta-sumerian-glyphs | colesimmons | 2024-05-18T08:39:14Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-17T16:38:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bernadette16/ft-wav2vec2-with-minds-asr | Bernadette16 | 2024-05-18T08:38:04Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-17T16:40:13Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ft-wav2vec2-with-minds-asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-wav2vec2-with-minds-asr
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2186
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 0.2 | 20 | 3.7561 | 1.0 |
| 8.7776 | 0.4 | 40 | 3.2186 | 1.0 |
| 3.2979 | 0.6 | 60 | 3.1543 | 1.0 |
| 3.2979 | 0.8 | 80 | 3.1295 | 1.0 |
| 3.1761 | 1.0 | 100 | 3.1033 | 1.0 |
| 3.1708 | 1.2 | 120 | 3.1019 | 1.0 |
| 3.1708 | 1.4 | 140 | 3.0894 | 1.0 |
| 3.0608 | 1.6 | 160 | 3.0664 | 1.0 |
| 3.0686 | 1.8 | 180 | 3.0616 | 1.0 |
| 3.0686 | 2.0 | 200 | 3.0622 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.3.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.2
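As a hedged sketch that is not part of the original card (and noting that the reported WER of 1.0 means transcriptions are unlikely to be useful), greedy CTC decoding with this checkpoint would look roughly as follows; the 16 kHz mono audio array is an assumption.
```python
# Hedged sketch: greedy CTC decoding with the fine-tuned wav2vec2 checkpoint.
import torch
from transformers import AutoModelForCTC, AutoProcessor

repo = "Bernadette16/ft-wav2vec2-with-minds-asr"
processor = AutoProcessor.from_pretrained(repo)
model = AutoModelForCTC.from_pretrained(repo)

def transcribe(audio_array):
    # audio_array: 1-D float samples at 16 kHz (assumption)
    inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]
```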
|
AlishbaZ/corgy_dog_LoRA | AlishbaZ | 2024-05-18T08:34:34Z | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-09T09:16:37Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - AlishbaZ/corgy_dog_LoRA
<Gallery />
## Model description
These are AlishbaZ/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](AlishbaZ/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
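A hedged sketch standing in for the TODO above: load SDXL, attach these LoRA weights, and generate with the trigger phrase. The inference settings are assumptions.
```python
# Hedged sketch: SDXL + this LoRA, prompted with the trigger phrase.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("AlishbaZ/corgy_dog_LoRA")

image = pipe("a photo of TOK dog on the beach", num_inference_steps=25).images[0]
image.save("corgy.png")
```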
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
raftrsf/pair_pref | raftrsf | 2024-05-18T08:13:45Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T07:48:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sayyed777/Sss | Sayyed777 | 2024-05-18T08:11:24Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-18T08:08:03Z | ---
license: apache-2.0
---
|
shkna1368/mt5-base-finetuned-mt5-base-poem4Final | shkna1368 | 2024-05-18T08:03:38Z | 115 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-18T06:42:42Z | ---
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
model-index:
- name: mt5-base-finetuned-mt5-base-poem4Final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-mt5-base-poem4Final
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 121 | nan |
| No log | 2.0 | 242 | nan |
| No log | 3.0 | 363 | nan |
| No log | 4.0 | 484 | nan |
| 0.0 | 5.0 | 605 | nan |
| 0.0 | 6.0 | 726 | nan |
| 0.0 | 7.0 | 847 | nan |
| 0.0 | 8.0 | 968 | nan |
| 0.0 | 9.0 | 1089 | nan |
| 0.0 | 10.0 | 1210 | nan |
| 0.0 | 11.0 | 1331 | nan |
| 0.0 | 12.0 | 1452 | nan |
| 0.0 | 13.0 | 1573 | nan |
| 0.0 | 14.0 | 1694 | nan |
| 0.0 | 15.0 | 1815 | nan |
| 0.0 | 16.0 | 1936 | nan |
| 0.0 | 17.0 | 2057 | nan |
| 0.0 | 18.0 | 2178 | nan |
| 0.0 | 19.0 | 2299 | nan |
| 0.0 | 20.0 | 2420 | nan |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
lora-library/B-LoRA-child | lora-library | 2024-05-18T07:58:56Z | 16 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T07:58:17Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v19]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - lora-library/B-LoRA-child
<Gallery />
## Model description
These are lora-library/B-LoRA-child LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use A [v19] to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](lora-library/B-LoRA-child/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
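A hedged sketch standing in for the TODO above. Note that B-LoRA weights are sometimes loaded with project-specific code, so treating them as ordinary SDXL LoRA weights here is an assumption, as are the prompt and inference settings.
```python
# Hedged sketch: SDXL + these B-LoRA weights, prompted with the trigger phrase.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("lora-library/B-LoRA-child")  # assumption: standard LoRA loading

image = pipe("A [v19] playing in a park", num_inference_steps=25).images[0]
image.save("b_lora_child.png")
```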
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
thanhduc1180/vistral_abmusu2022 | thanhduc1180 | 2024-05-18T07:53:05Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-05T08:10:52Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
StudentDHBW/q-Taxi-v3-3 | StudentDHBW | 2024-05-18T07:48:00Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T07:47:58Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # classic `gym` also works if that is what was used during training

model = load_from_hub(repo_id="StudentDHBW/q-Taxi-v3-3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
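The snippet above assumes a `load_from_hub` helper such as the one used in the Hugging Face Deep RL course notebooks. A hedged sketch of that helper, built on `huggingface_hub`, is shown below.
```python
# Hedged sketch of the assumed load_from_hub helper for pickled Q-tables.
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-learning model dict from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```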
|
RichardErkhov/ZySec-AI_-_ZySec-7B-8bits | RichardErkhov | 2024-05-18T07:45:04Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-18T07:39:40Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ZySec-7B - bnb 8bits
- Model creator: https://huggingface.co/ZySec-AI/
- Original model: https://huggingface.co/ZySec-AI/ZySec-7B/
Original model description:
---
library_name: transformers
license: apache-2.0
tags:
- security
- cybersecwithai
- threat
- vulnerability
- infosec
- zysec.ai
- cyber security
- ai4security
- llmsecurity
- cyber
- malware analysis
- exploitdev
- ai4good
- aisecurity
- threat
- cybersec
- cybersecurity
---
# ZySec-7B
ZySec-7B stands as a pivotal innovation for security professionals, leveraging the advanced capabilities of HuggingFace's Zephyr language model series. This AI model is crafted to be an omnipresent cybersecurity ally, offering on-demand expert guidance on cybersecurity issues. Picture ZySec-7B as an ever-present digital teammate, adept at navigating the complexities of security challenges.
The efficacy of ZySec-7B lies in its comprehensive training across numerous cybersecurity fields, providing a deep and wide-ranging understanding of the sector. ZySec is developed using the DPO technique, utilizing a varied dataset encompassing critical topics such as:
- Sophisticated areas like Attack Surface Threats, Cloud Security, and the Cyber Kill Chain.
- Key compliance and regulatory frameworks, including CIS Controls, FedRAMP, PCI DSS, and ISO/IEC 27001.
- Practical aspects like Cloud Secure Migration, Data Exfiltration Techniques, and Security Incident Handling.
- Crucial strategic fields such as Security Governance, Risk Management, and Security Architecture Review.
ZySec-7B's training spans over 30 unique domains, each enriched with thousands of data points, delivering unparalleled expertise.
As the first of its kind in an open-source, AI-driven cybersecurity series, ZySec-7B transcends the conventional role of a support tool, redefining organizational security approaches. Its open-source nature not only invites community contributions but also enhances its flexibility and transparency in managing vast cybersecurity data. ZySec-7B is instrumental in providing vital, actionable insights for strategic decision-making and advanced risk management. More than mere software, ZySec-7B is a community-enhanced strategic tool, equipping your team to proactively confront and stay ahead of the dynamic landscape of cyber threats and regulatory demands.
# For suggestions please use [Road Map](https://zysec-ai.productlift.dev/t/roadmap)
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/ZySec-7B-dataset-composition.png?download=true" alt="Dataset Distribution" width="90%"/>
Details of dataset distribution here - [Dataset Distribution](https://huggingface.co/aihub-app/ZySec-7B/resolve/main/ZySec-7B-dataset-composition.png?download=true)
Fully compatible with [LM Studio](https://lmstudio.ai): search for “ZySec” to find the model. Below is a sample output of ZySec writing an email to John about database security in LM Studio:
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/sample-output.png" alt="Sample Output" width="90%"/>
---
The training is funded by [AttackIO](https://www.attackio.app), the mobile app for Cyber Security professionals.
Official GGUF version is hosted here - [ZySec-7B-v1-GGUF on HuggingFace](https://huggingface.co/aihub-app/ZySec-7B-v1-GGUF)
## [ZySec AI: Unleashing the Potential of the ZySec Series Model](https://github.com/ZySec-AI/ZySec)
Project ZySec, an integral part of ZySec AI, stands at the forefront of integrating Artificial Intelligence into Cybersecurity. Centered around the innovative ZySec 7B model, it's designed to revolutionize the cybersecurity landscape with AI-driven solutions. ZySec AI isn't just a tool, it's a transformative approach, blending AI's cutting-edge capabilities with the unique intricacies of cybersecurity, while ensuring privacy and security.
### Discover the Key Features of Project ZySec
- **AI-Driven Cybersecurity:** Tap into the power of the ZySec 7B model, a bespoke AI solution fine-tuned for cybersecurity.
- **24/7 Expert Assistance:** Benefit from round-the-clock support and expert advice, guaranteeing smooth operations during any SOC shift.
- **Efficient Playbook Access:** Streamline your workflow with quick and easy access to playbooks and documents, enhancing information retrieval.
- **Standards Explorer:** Navigate various standards with ease, akin to a seasoned expert's proficiency.
- **Ongoing Internet Research:** Leverage AI-enabled, thorough internet research for exhaustive insights. (Note: Internet use is optional and specific to this feature).
### About Project ZySec by ZySec AI
ZySec AI is an open-source project with a vision of fusing Cybersecurity with Artificial Intelligence. Our goal is to transform the way security professionals engage with technology. More than a mere tool, ZySec AI symbolizes a comprehensive strategy to augment security operations, merging the innovative essence of AI with cybersecurity's distinctive challenges while always ensuring privacy and security.
https://github.com/ZySec-AI/ZySec
### The ZySec Roadmap
https://github.com/ZySec-AI/.github/blob/main/roadmap.md
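As a hedged sketch that is not part of the original card: since this repository stores bitsandbytes 8-bit weights, loading it directly with `transformers` should work; the device placement and generation settings below are assumptions and require a CUDA GPU with `bitsandbytes` installed.
```python
# Hedged sketch: load the 8-bit quantized checkpoint and generate a short answer.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/ZySec-AI_-_ZySec-7B-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# For best results, wrap the prompt with the model's chat template if one is provided.
inputs = tokenizer("List three controls from ISO/IEC 27001.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```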
|
vuongnhathien/vit-base-oxford-iiit-pets | vuongnhathien | 2024-05-18T07:39:22Z | 222 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-18T07:24:31Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2076
- Accuracy: 0.9378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7188 | 1.0 | 185 | 0.3688 | 0.9147 |
| 0.2918 | 2.0 | 370 | 0.2578 | 0.9337 |
| 0.2057 | 3.0 | 555 | 0.2298 | 0.9364 |
| 0.1784 | 4.0 | 740 | 0.2196 | 0.9391 |
| 0.1688 | 5.0 | 925 | 0.2167 | 0.9405 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
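As an illustrative sketch that is not part of the original card, the checkpoint can be used through the image-classification pipeline; the image path below is a placeholder.
```python
# Hedged sketch: classify a pet image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="vuongnhathien/vit-base-oxford-iiit-pets")
print(classifier("my_pet.jpg")[:3])  # top predicted breeds; path is a placeholder
```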
|
RichardErkhov/ZySec-AI_-_ZySec-7B-4bits | RichardErkhov | 2024-05-18T07:39:11Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-18T07:36:00Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ZySec-7B - bnb 4bits
- Model creator: https://huggingface.co/ZySec-AI/
- Original model: https://huggingface.co/ZySec-AI/ZySec-7B/
Original model description:
---
library_name: transformers
license: apache-2.0
tags:
- security
- cybersecwithai
- threat
- vulnerability
- infosec
- zysec.ai
- cyber security
- ai4security
- llmsecurity
- cyber
- malware analysis
- exploitdev
- ai4good
- aisecurity
- threat
- cybersec
- cybersecurity
---
# ZySec-7B
ZySec-7B stands as a pivotal innovation for security professionals, leveraging the advanced capabilities of HuggingFace's Zephyr language model series. This AI model is crafted to be an omnipresent cybersecurity ally, offering on-demand expert guidance on cybersecurity issues. Picture ZySec-7B as an ever-present digital teammate, adept at navigating the complexities of security challenges.
The efficacy of ZySec-7B lies in its comprehensive training across numerous cybersecurity fields, providing a deep and wide-ranging understanding of the sector. ZySec is developed using the DPO technique, utilizing a varied dataset encompassing critical topics such as:
- Sophisticated areas like Attack Surface Threats, Cloud Security, and the Cyber Kill Chain.
- Key compliance and regulatory frameworks, including CIS Controls, FedRAMP, PCI DSS, and ISO/IEC 27001.
- Practical aspects like Cloud Secure Migration, Data Exfiltration Techniques, and Security Incident Handling.
- Crucial strategic fields such as Security Governance, Risk Management, and Security Architecture Review.
ZySec-7B's training spans over 30 unique domains, each enriched with thousands of data points, delivering unparalleled expertise.
As the first of its kind in an open-source, AI-driven cybersecurity series, ZySec-7B transcends the conventional role of a support tool, redefining organizational security approaches. Its open-source nature not only invites community contributions but also enhances its flexibility and transparency in managing vast cybersecurity data. ZySec-7B is instrumental in providing vital, actionable insights for strategic decision-making and advanced risk management. More than mere software, ZySec-7B is a community-enhanced strategic tool, equipping your team to proactively confront and stay ahead of the dynamic landscape of cyber threats and regulatory demands.
# For suggestions please use [Road Map](https://zysec-ai.productlift.dev/t/roadmap)
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/ZySec-7B-dataset-composition.png?download=true" alt="Dataset Distribution" width="90%"/>
Details of dataset distribution here - [Dataset Distribution](https://huggingface.co/aihub-app/ZySec-7B/resolve/main/ZySec-7B-dataset-composition.png?download=true)
Fully compatible with [LM Studio](https://lmstudio.ai); search for “ZySec” to find it. Here is a sample output of ZySec writing an email to John about database security in LM Studio:
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/sample-output.png" alt="Sample Output" width="90%"/>
---
The training is funded by [AttackIO](https://www.attackio.app), the mobile app for Cyber Security professionals.
Official GGUF version is hosted here - [ZySec-7B-v1-GGUF on HuggingFace](https://huggingface.co/aihub-app/ZySec-7B-v1-GGUF)
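Since the card points at a GGUF export, here is a minimal local-inference sketch using the `llama-cpp-python` bindings; the model path is a hypothetical local filename for one of the quantizations, and the prompt and generation settings are arbitrary examples, not recommendations from the authors.
```python
# Hedged sketch: run a locally downloaded GGUF quantization of ZySec-7B.
# The filename below is a placeholder, not an official artifact name.
from llama_cpp import Llama

llm = Llama(
    model_path="./ZySec-7B.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,                           # assumed context window
)

prompt = "Draft a short email to John summarizing key database security controls."
out = llm(prompt, max_tokens=256, stop=["</s>"])
print(out["choices"][0]["text"])
```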
## [ZySec AI: Unleashing the Potential of the ZySec Series Model](https://github.com/ZySec-AI/ZySec)
Project ZySec, an integral part of ZySec AI, stands at the forefront of integrating Artificial Intelligence into Cybersecurity. Centered around the innovative ZySec 7B model, it's designed to revolutionize the cybersecurity landscape with AI-driven solutions. ZySec AI isn't just a tool; it's a transformative approach, blending AI's cutting-edge capabilities with the unique intricacies of cybersecurity, while ensuring privacy and security.
### Discover the Key Features of Project ZySec
- **AI-Driven Cybersecurity:** Tap into the power of the ZySec 7B model, a bespoke AI solution fine-tuned for cybersecurity.
- **24/7 Expert Assistance:** Benefit from round-the-clock support and expert advice, guaranteeing smooth operations during any SOC shift.
- **Efficient Playbook Access:** Streamline your workflow with quick and easy access to playbooks and documents, enhancing information retrieval.
- **Standards Explorer:** Navigate various standards with ease, akin to a seasoned expert's proficiency.
- **Ongoing Internet Research:** Leverage AI-enabled, thorough internet research for exhaustive insights. (Note: Internet use is optional and specific to this feature).
### About Project ZySec by ZySec AI
ZySec AI is an open-source project with a vision of fusing Cybersecurity with Artificial Intelligence. Our goal is to transform the way security professionals engage with technology. More than a mere tool, ZySec AI symbolizes a comprehensive strategy to augment security operations, merging the innovative essence of AI with cybersecurity's distinctive challenges, always ensuring privacy and security.
https://github.com/ZySec-AI/ZySec
### The ZySec Roadmap
https://github.com/ZySec-AI/.github/blob/main/roadmap.md
|
StudentDHBW/q-Taxi-v3-2 | StudentDHBW | 2024-05-18T07:34:05Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T07:34:03Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="StudentDHBW/q-Taxi-v3-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
euiyulsong/Mistral-7B-ORPO-sft-synth-500 | euiyulsong | 2024-05-18T07:32:03Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"orpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-18T07:27:49Z | ---
library_name: transformers
tags:
- trl
- sft
- orpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
StudentDHBW/q-Taxi-v3 | StudentDHBW | 2024-05-18T07:29:25Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T07:29:23Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="StudentDHBW/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Fitti/PPO-Huggy | Fitti | 2024-05-18T07:29:02Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-05-18T07:28:56Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Fitti/PPO-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
yaswanthchittepu/pythia-1b-tldr-dpo-beta-0.01-alpha-0-LATEST | yaswanthchittepu | 2024-05-18T07:04:24Z | 211 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T07:01:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amaye15/google-vit-base-patch16-224-batch32-lr5e-05-standford-dogs | amaye15 | 2024-05-18T06:59:37Z | 224 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:stanford-dogs",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-18T06:59:16Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- stanford-dogs
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: google-vit-base-patch16-224-batch32-lr5e-05-standford-dogs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: stanford-dogs
type: stanford-dogs
config: default
split: full
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8865403304178814
- name: F1
type: f1
value: 0.8829055367708631
- name: Precision
type: precision
value: 0.8892817099907323
- name: Recall
type: recall
value: 0.8836513270735221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-vit-base-patch16-224-batch32-lr5e-05-standford-dogs
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the stanford-dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4497
- Accuracy: 0.8865
- F1: 0.8829
- Precision: 0.8893
- Recall: 0.8837
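As a usage illustration that is not part of the original card, the checkpoint can presumably be loaded through the standard image-classification pipeline; the image path below is a placeholder.
```python
# Hedged usage sketch; the image path is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="amaye15/google-vit-base-patch16-224-batch32-lr5e-05-standford-dogs",
)

for pred in classifier("path/to/a_dog_photo.jpg"):  # local file or URL
    print(f"{pred['label']}: {pred['score']:.3f}")
```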
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 4.7916 | 0.0777 | 10 | 4.5904 | 0.0328 | 0.0240 | 0.0321 | 0.0343 |
| 4.5526 | 0.1553 | 20 | 4.2901 | 0.1118 | 0.0891 | 0.1068 | 0.1134 |
| 4.2946 | 0.2330 | 30 | 3.9659 | 0.2602 | 0.2124 | 0.2287 | 0.2522 |
| 3.9673 | 0.3107 | 40 | 3.6288 | 0.4351 | 0.3666 | 0.4093 | 0.4189 |
| 3.69 | 0.3883 | 50 | 3.3225 | 0.5394 | 0.4751 | 0.5232 | 0.5244 |
| 3.4705 | 0.4660 | 60 | 3.0343 | 0.6261 | 0.5750 | 0.6563 | 0.6139 |
| 3.2239 | 0.5437 | 70 | 2.7671 | 0.6842 | 0.6503 | 0.7272 | 0.6743 |
| 2.9986 | 0.6214 | 80 | 2.5191 | 0.7262 | 0.6971 | 0.7601 | 0.7161 |
| 2.7575 | 0.6990 | 90 | 2.2953 | 0.7430 | 0.7162 | 0.7735 | 0.7333 |
| 2.5923 | 0.7767 | 100 | 2.1008 | 0.7694 | 0.7470 | 0.7956 | 0.7600 |
| 2.4265 | 0.8544 | 110 | 1.9250 | 0.7949 | 0.7762 | 0.8094 | 0.7863 |
| 2.3049 | 0.9320 | 120 | 1.7636 | 0.8054 | 0.7861 | 0.8173 | 0.7971 |
| 2.1243 | 1.0097 | 130 | 1.6290 | 0.8200 | 0.8056 | 0.8382 | 0.8125 |
| 1.9721 | 1.0874 | 140 | 1.5121 | 0.8226 | 0.8084 | 0.8396 | 0.8149 |
| 1.848 | 1.1650 | 150 | 1.4282 | 0.8163 | 0.8002 | 0.8362 | 0.8083 |
| 1.775 | 1.2427 | 160 | 1.3034 | 0.8304 | 0.8171 | 0.8438 | 0.8238 |
| 1.717 | 1.3204 | 170 | 1.2343 | 0.8275 | 0.8126 | 0.8460 | 0.8207 |
| 1.6203 | 1.3981 | 180 | 1.1554 | 0.8387 | 0.8259 | 0.8552 | 0.8323 |
| 1.5739 | 1.4757 | 190 | 1.0944 | 0.8484 | 0.8384 | 0.8593 | 0.8420 |
| 1.5508 | 1.5534 | 200 | 1.0400 | 0.8484 | 0.8394 | 0.8574 | 0.8431 |
| 1.4549 | 1.6311 | 210 | 0.9943 | 0.8452 | 0.8340 | 0.8497 | 0.8399 |
| 1.3907 | 1.7087 | 220 | 0.9427 | 0.8596 | 0.8480 | 0.8627 | 0.8542 |
| 1.3497 | 1.7864 | 230 | 0.8936 | 0.8569 | 0.8461 | 0.8647 | 0.8516 |
| 1.2618 | 1.8641 | 240 | 0.8619 | 0.8613 | 0.8503 | 0.8671 | 0.8560 |
| 1.3014 | 1.9417 | 250 | 0.8324 | 0.8603 | 0.8508 | 0.8737 | 0.8553 |
| 1.2209 | 2.0194 | 260 | 0.8015 | 0.8591 | 0.8503 | 0.8645 | 0.8537 |
| 1.2139 | 2.0971 | 270 | 0.7824 | 0.8596 | 0.8517 | 0.8656 | 0.8544 |
| 1.1364 | 2.1748 | 280 | 0.7544 | 0.8603 | 0.8513 | 0.8611 | 0.8556 |
| 1.1811 | 2.2524 | 290 | 0.7283 | 0.8683 | 0.8605 | 0.8785 | 0.8637 |
| 1.1316 | 2.3301 | 300 | 0.7169 | 0.8635 | 0.8550 | 0.8653 | 0.8590 |
| 1.1246 | 2.4078 | 310 | 0.6900 | 0.8686 | 0.8610 | 0.8739 | 0.8645 |
| 1.1027 | 2.4854 | 320 | 0.6862 | 0.8627 | 0.8548 | 0.8730 | 0.8582 |
| 1.0911 | 2.5631 | 330 | 0.6667 | 0.8693 | 0.8632 | 0.8730 | 0.8653 |
| 1.0158 | 2.6408 | 340 | 0.6544 | 0.8695 | 0.8628 | 0.8751 | 0.8651 |
| 1.0805 | 2.7184 | 350 | 0.6342 | 0.8703 | 0.8634 | 0.8733 | 0.8663 |
| 1.0679 | 2.7961 | 360 | 0.6276 | 0.8754 | 0.8689 | 0.8797 | 0.8713 |
| 1.0611 | 2.8738 | 370 | 0.6223 | 0.8746 | 0.8692 | 0.8807 | 0.8705 |
| 0.9996 | 2.9515 | 380 | 0.6055 | 0.8724 | 0.8661 | 0.8758 | 0.8683 |
| 1.0838 | 3.0291 | 390 | 0.6039 | 0.8715 | 0.8652 | 0.8769 | 0.8677 |
| 0.9396 | 3.1068 | 400 | 0.5946 | 0.8737 | 0.8676 | 0.8791 | 0.8699 |
| 0.8466 | 3.1845 | 410 | 0.5810 | 0.8717 | 0.8653 | 0.8775 | 0.8673 |
| 0.9588 | 3.2621 | 420 | 0.5819 | 0.8710 | 0.8651 | 0.8766 | 0.8671 |
| 0.9784 | 3.3398 | 430 | 0.5742 | 0.8754 | 0.8684 | 0.8788 | 0.8716 |
| 0.9289 | 3.4175 | 440 | 0.5667 | 0.8768 | 0.8703 | 0.8792 | 0.8731 |
| 0.8917 | 3.4951 | 450 | 0.5615 | 0.8724 | 0.8672 | 0.8762 | 0.8690 |
| 0.8646 | 3.5728 | 460 | 0.5537 | 0.8737 | 0.8681 | 0.8761 | 0.8702 |
| 0.9029 | 3.6505 | 470 | 0.5538 | 0.8732 | 0.8694 | 0.8771 | 0.8698 |
| 0.9551 | 3.7282 | 480 | 0.5440 | 0.8766 | 0.8720 | 0.8809 | 0.8735 |
| 0.8787 | 3.8058 | 490 | 0.5448 | 0.8751 | 0.8704 | 0.8791 | 0.8712 |
| 0.9128 | 3.8835 | 500 | 0.5354 | 0.8751 | 0.8701 | 0.8799 | 0.8712 |
| 0.8566 | 3.9612 | 510 | 0.5262 | 0.8776 | 0.8715 | 0.8846 | 0.8738 |
| 0.8624 | 4.0388 | 520 | 0.5252 | 0.8754 | 0.8692 | 0.8840 | 0.8715 |
| 0.799 | 4.1165 | 530 | 0.5197 | 0.8763 | 0.8702 | 0.8817 | 0.8723 |
| 0.7912 | 4.1942 | 540 | 0.5213 | 0.8751 | 0.8695 | 0.8815 | 0.8709 |
| 0.874 | 4.2718 | 550 | 0.5142 | 0.8778 | 0.8730 | 0.8862 | 0.8742 |
| 0.766 | 4.3495 | 560 | 0.5019 | 0.8817 | 0.8770 | 0.8864 | 0.8783 |
| 0.8902 | 4.4272 | 570 | 0.5011 | 0.8831 | 0.8785 | 0.8887 | 0.8798 |
| 0.8038 | 4.5049 | 580 | 0.5014 | 0.8800 | 0.8742 | 0.8878 | 0.8762 |
| 0.8893 | 4.5825 | 590 | 0.5062 | 0.8797 | 0.8744 | 0.8851 | 0.8759 |
| 0.7868 | 4.6602 | 600 | 0.4926 | 0.8827 | 0.8785 | 0.8867 | 0.8791 |
| 0.7733 | 4.7379 | 610 | 0.4957 | 0.8783 | 0.8749 | 0.8816 | 0.8755 |
| 0.8275 | 4.8155 | 620 | 0.4871 | 0.8817 | 0.8781 | 0.8847 | 0.8785 |
| 0.7944 | 4.8932 | 630 | 0.4855 | 0.8858 | 0.8823 | 0.8880 | 0.8829 |
| 0.8483 | 4.9709 | 640 | 0.4849 | 0.8836 | 0.8797 | 0.8858 | 0.8803 |
| 0.7297 | 5.0485 | 650 | 0.4833 | 0.8814 | 0.8779 | 0.8845 | 0.8784 |
| 0.754 | 5.1262 | 660 | 0.4824 | 0.8814 | 0.8775 | 0.8844 | 0.8782 |
| 0.698 | 5.2039 | 670 | 0.4806 | 0.8851 | 0.8818 | 0.8878 | 0.8821 |
| 0.7515 | 5.2816 | 680 | 0.4777 | 0.8824 | 0.8791 | 0.8855 | 0.8796 |
| 0.7527 | 5.3592 | 690 | 0.4711 | 0.8841 | 0.8806 | 0.8869 | 0.8808 |
| 0.7287 | 5.4369 | 700 | 0.4718 | 0.8853 | 0.8819 | 0.8873 | 0.8824 |
| 0.8134 | 5.5146 | 710 | 0.4680 | 0.8856 | 0.8826 | 0.8885 | 0.8828 |
| 0.7655 | 5.5922 | 720 | 0.4688 | 0.8836 | 0.8795 | 0.8862 | 0.8800 |
| 0.7904 | 5.6699 | 730 | 0.4671 | 0.8878 | 0.8841 | 0.8901 | 0.8846 |
| 0.7257 | 5.7476 | 740 | 0.4704 | 0.8824 | 0.8790 | 0.8872 | 0.8796 |
| 0.7342 | 5.8252 | 750 | 0.4641 | 0.8841 | 0.8802 | 0.8889 | 0.8810 |
| 0.7075 | 5.9029 | 760 | 0.4654 | 0.8824 | 0.8782 | 0.8865 | 0.8791 |
| 0.7924 | 5.9806 | 770 | 0.4619 | 0.8868 | 0.8829 | 0.8899 | 0.8839 |
| 0.7176 | 6.0583 | 780 | 0.4597 | 0.8861 | 0.8815 | 0.8889 | 0.8829 |
| 0.6768 | 6.1359 | 790 | 0.4595 | 0.8858 | 0.8820 | 0.8910 | 0.8827 |
| 0.722 | 6.2136 | 800 | 0.4605 | 0.8836 | 0.8796 | 0.8882 | 0.8803 |
| 0.7429 | 6.2913 | 810 | 0.4594 | 0.8865 | 0.8823 | 0.8912 | 0.8833 |
| 0.6904 | 6.3689 | 820 | 0.4611 | 0.8856 | 0.8821 | 0.8892 | 0.8825 |
| 0.7617 | 6.4466 | 830 | 0.4592 | 0.8856 | 0.8816 | 0.8879 | 0.8826 |
| 0.7285 | 6.5243 | 840 | 0.4576 | 0.8863 | 0.8822 | 0.8895 | 0.8832 |
| 0.686 | 6.6019 | 850 | 0.4561 | 0.8875 | 0.8834 | 0.8923 | 0.8844 |
| 0.6546 | 6.6796 | 860 | 0.4561 | 0.8865 | 0.8824 | 0.8903 | 0.8835 |
| 0.6526 | 6.7573 | 870 | 0.4543 | 0.8875 | 0.8830 | 0.8917 | 0.8844 |
| 0.7534 | 6.8350 | 880 | 0.4537 | 0.8885 | 0.8845 | 0.8927 | 0.8855 |
| 0.7065 | 6.9126 | 890 | 0.4535 | 0.8870 | 0.8831 | 0.8912 | 0.8841 |
| 0.774 | 6.9903 | 900 | 0.4528 | 0.8878 | 0.8842 | 0.8924 | 0.8849 |
| 0.7185 | 7.0680 | 910 | 0.4516 | 0.8880 | 0.8840 | 0.8913 | 0.8849 |
| 0.6321 | 7.1456 | 920 | 0.4526 | 0.8868 | 0.8830 | 0.8900 | 0.8838 |
| 0.6957 | 7.2233 | 930 | 0.4517 | 0.8865 | 0.8825 | 0.8901 | 0.8834 |
| 0.6774 | 7.3010 | 940 | 0.4523 | 0.8863 | 0.8823 | 0.8895 | 0.8833 |
| 0.6915 | 7.3786 | 950 | 0.4528 | 0.8853 | 0.8814 | 0.8890 | 0.8822 |
| 0.6738 | 7.4563 | 960 | 0.4520 | 0.8868 | 0.8829 | 0.8901 | 0.8838 |
| 0.7021 | 7.5340 | 970 | 0.4510 | 0.8863 | 0.8826 | 0.8897 | 0.8834 |
| 0.7053 | 7.6117 | 980 | 0.4501 | 0.8863 | 0.8827 | 0.8885 | 0.8835 |
| 0.7241 | 7.6893 | 990 | 0.4498 | 0.8865 | 0.8829 | 0.8893 | 0.8837 |
| 0.703 | 7.7670 | 1000 | 0.4497 | 0.8865 | 0.8829 | 0.8893 | 0.8837 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
vijayhn/llama2-7b-base-w-ft-sql-18052024 | vijayhn | 2024-05-18T06:56:12Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T06:46:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blockblockblock/open_llama_3b_v2-bpw4-exl2 | blockblockblock | 2024-05-18T06:50:20Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-18T03:39:02Z | ---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'openlm-research/open_llama_3b_v2'
# model_path = 'openlm-research/open_llama_7b_v2'
## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
To use the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism](https://engineering.fb.com/2021/07/15/open-source/fsdp/) (also known as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
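To make the sharding idea concrete, the snippet below is an illustration only: a PyTorch FSDP analogue of ZeRO-stage-3 sharding, not the JAX/EasyLM pipeline actually used to train OpenLLaMA.
```python
# Illustration only: PyTorch FSDP (ZeRO-3-style parameter/gradient/optimizer
# sharding), *not* the JAX/EasyLM code used for OpenLLaMA. Assumes a torchrun
# launch with one process per GPU.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from transformers import AutoModelForCausalLM

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b_v2")
model = FSDP(model.cuda())  # shards parameters, gradients and optimizer state

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)  # example value
# ... standard forward / backward / optimizer.step() training loop ...
```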
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 3Bv2 | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.34 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.36 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.68 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.63 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.66 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.52 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.70 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.26 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.38 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.77 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.78 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.87 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.88 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.55 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.22 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.63 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.53 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, and Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
euiyulsong/Mistral-7B-ORPO-sft-sync-task_domain_20k | euiyulsong | 2024-05-18T06:50:02Z | 80 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"orpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-18T06:45:45Z | ---
library_name: transformers
tags:
- trl
- sft
- orpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
katk31/q-Taxi-v3-2 | katk31 | 2024-05-18T06:45:45Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T06:30:05Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="katk31/q-Taxi-v3-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Bengmane/Ok | Bengmane | 2024-05-18T06:22:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-18T06:22:09Z | ---
license: apache-2.0
---
|
DUAL-GPO/phi-2-gpo-20k-40k-60k-i1 | DUAL-GPO | 2024-05-18T06:19:33Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO-2/phi-2-gpo-v34-merged-i1",
"base_model:adapter:DUAL-GPO-2/phi-2-gpo-v34-merged-i1",
"region:us"
] | null | 2024-05-17T11:56:43Z | ---
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: DUAL-GPO-2/phi-2-gpo-v34-merged-i1
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-gpo-20k-40k-60k-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-20k-40k-60k-i1
This model is a fine-tuned version of [DUAL-GPO-2/phi-2-gpo-v34-merged-i1](https://huggingface.co/DUAL-GPO-2/phi-2-gpo-v34-merged-i1) on the HuggingFaceH4/ultrafeedback_binarized dataset.
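Because this repository stores a PEFT (LoRA-style) adapter rather than full model weights, a minimal loading sketch is given below; it assumes the adapter config resolves the merged phi-2 base listed above and that the base repo hosts the tokenizer, and `trust_remote_code` is included only because of the phi custom-code requirement at this Transformers version.
```python
# Hedged sketch: load the adapter on top of its recorded base model.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "DUAL-GPO/phi-2-gpo-20k-40k-60k-i1"
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
# Assumption: the tokenizer lives alongside the base model.
tokenizer = AutoTokenizer.from_pretrained(
    "DUAL-GPO-2/phi-2-gpo-v34-merged-i1", trust_remote_code=True
)

inputs = tokenizer("Explain DPO in one sentence.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```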
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
damgomz/ThunBERT_bs8_lr4 | damgomz | 2024-05-18T06:19:28Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-15T09:38:46Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-11T13:59:43'
project_name: ThunBERT_bs8_lr4_emissions_tracker
run_id: 3345f532-5960-49ec-a891-053ef2514cfb
duration: 170213.14050722122
emissions: 0.1781595473385588
emissions_rate: 1.0466850374046206e-06
cpu_power: 42.5
gpu_power: 0.0
ram_power: 37.5
cpu_energy: 2.0094578674973738
gpu_energy: 0
ram_energy: 1.7730424475396678
energy_consumed: 3.782500315037023
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 4
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 100
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 170213.14050722122 |
| Emissions (Co2eq in kg) | 0.1781595473385588 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 37.5 |
| CPU energy (kWh) | 2.0094578674973738 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 1.7730424475396678 |
| Consumed energy (kWh) | 3.782500315037023 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 4 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
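As a quick sanity check that is not part of the original report, the energy figures above are approximately power multiplied by duration, converted from joules to kWh.
```python
# Approximate consistency check of the table above (values copied from it).
duration_s  = 170213.14050722122
cpu_power_w = 42.5
ram_power_w = 37.5

cpu_energy_kwh = cpu_power_w * duration_s / 3.6e6  # ~2.0095 kWh, close to the table
ram_energy_kwh = ram_power_w * duration_s / 3.6e6  # ~1.7731 kWh, close to the table
print(cpu_energy_kwh + ram_energy_kwh)             # ~3.7825 kWh consumed in total
```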
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.32766029547640085 |
| Emissions (Co2eq in kg) | 0.06666681336532831 |
## Note
15 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ThunBERT_bs8_lr4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 0.0005 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 82827 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 6.835651 | 13.557409 |
| 0.5 | 7.894119 | 7.782390 |
| 1.0 | 7.756104 | 7.761859 |
| 1.5 | 7.724590 | 7.737085 |
| 2.0 | 7.705112 | 7.713643 |
|
Armandodelca/Prototipo_8_EMI | Armandodelca | 2024-05-18T06:14:39Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T03:36:16Z | ---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
model-index:
- name: Prototipo_8_EMI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototipo_8_EMI
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7554
- eval_accuracy: 0.6506
- eval_runtime: 13.3415
- eval_samples_per_second: 374.771
- eval_steps_per_second: 7.495
- epoch: 1.3333
- step: 3200
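As an illustration only, the checkpoint can presumably be loaded like any BETO-based sequence classifier; the example sentence is arbitrary and the meaning of the output classes is not documented on this card.
```python
# Hedged usage sketch; label semantics are not documented on this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "Armandodelca/Prototipo_8_EMI"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("El servicio fue excelente, muy recomendable.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```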
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
antitheft159/kaltstart.195 | antitheft159 | 2024-05-18T06:12:29Z | 0 | 0 | null | [
"license:cc-by-nd-4.0",
"region:us"
] | null | 2024-05-18T06:10:32Z | ---
license: cc-by-nd-4.0
---
|
katk31/q-Taxi-v3-1 | katk31 | 2024-05-18T06:11:37Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T06:08:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the original course code may use `import gym` instead
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="katk31/q-Taxi-v3-1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
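Continuing from the snippet above, the Q-table can then be used greedily to roll out an episode. This sketch assumes the Gymnasium-style API (`reset` returns `(obs, info)`, `step` returns five values) and that the pickled dict stores the table under a `qtable` key, as in the Deep RL course convention.
```python
import numpy as np

# Greedy rollout with the downloaded Q-table (assumptions noted above).
state, _ = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # exploit the learned values
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```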
|
ubaada/long-t5-tglobal-base | ubaada | 2024-05-18T06:08:04Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/long-t5-tglobal-base",
"base_model:finetune:google/long-t5-tglobal-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-17T02:15:18Z | ---
license: apache-2.0
base_model: google/long-t5-tglobal-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: long-t5-tglobal-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/theubaada/huggingface/runs/2p17lh0w)
# long-t5-tglobal-base
This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9401
- Rouge1: 0.1934
- Rouge2: 0.0269
- Rougel: 0.1151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128 (see the arithmetic check after this list)
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 13
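The reported effective batch size is simply the product of the per-device batch size, the number of devices, and the gradient accumulation steps, as the short check below shows.
```python
# Effective batch size = per-device batch size × number of devices × accumulation steps.
per_device_train_batch_size = 8
num_devices = 4
gradient_accumulation_steps = 4
total_train_batch_size = (per_device_train_batch_size
                          * num_devices
                          * gradient_accumulation_steps)
assert total_train_batch_size == 128
```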
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| 1.5731 | 0.9996 | 600 | 1.9730 | 0.1342 | 0.0151 | 0.0912 |
| 1.3694 | 1.9996 | 1200 | 1.9623 | 0.1371 | 0.0175 | 0.0909 |
| 1.9561 | 2.9992 | 1800 | 1.9565 | 0.1423 | 0.0178 | 0.0928 |
| 1.0882 | 3.9996 | 2400 | 1.9548 | 0.1417 | 0.0186 | 0.0900 |
| 1.4872 | 4.9992 | 3000 | 1.9412 | 0.1581 | 0.0212 | 0.1006 |
| 1.4126 | 5.9988 | 3600 | 1.9486 | 0.1589 | 0.0188 | 0.0986 |
| 1.1634 | 7.0 | 4201 | 1.9464 | 0.1756 | 0.0229 | 0.1046 |
| 0.9541 | 7.9996 | 4801 | 1.9401 | 0.1791 | 0.0243 | 0.1078 |
| 0.9153 | 8.9975 | 5400 | 1.9401 | 0.1934 | 0.0269 | 0.1151 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Mithilss/gguf-try-promethus | Mithilss | 2024-05-18T06:03:00Z | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:prometheus-eval/prometheus-7b-v2.0",
"base_model:adapter:prometheus-eval/prometheus-7b-v2.0",
"region:us"
] | null | 2024-05-18T06:02:37Z | ---
library_name: peft
base_model: prometheus-eval/prometheus-7b-v2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
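Pending author-provided instructions, a generic loading sketch for a PEFT adapter on top of the base model listed in the metadata might look like the following (device settings are illustrative only).
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "prometheus-eval/prometheus-7b-v2.0"   # base model from the card metadata
adapter_id = "Mithilss/gguf-try-promethus"       # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```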
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
thiagoquilice/tweets_deforestation_all_withoutRTorduplicate | thiagoquilice | 2024-05-18T05:57:49Z | 4 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-05-18T05:57:44Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# tweets_deforestation_all_withoutRTorduplicate
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("thiagoquilice/tweets_deforestation_all_withoutRTorduplicate")
topic_model.get_topic_info()
```
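Beyond inspecting the fitted topics, the loaded model can also assign topics to new documents with `transform`; the example tweets below are made up for illustration.
```python
# Assign topics to new (hypothetical) documents.
new_docs = [
    "O desmatamento da Amazônia bateu recorde este ano",
    "Assine a petição para impedir a exploração da floresta",
]
topics, probs = topic_model.transform(new_docs)

# Inspect the top keywords of the first predicted topic.
print(topic_model.get_topic(topics[0]))
```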
## Topic overview
* Number of topics: 733
* Number of training documents: 361202
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | do - que - os - no - de | 50 | Deforestation in Brazil |
| 0 | peti - assine - impedir - explora - via | 155764 | Preventing Deforestation in the Amazon |
| 1 | soja - planta - plantar - macron - depender | 45853 | Impact of soy plantations on the Amazon rainforest |
| 2 | aqui - isso - mat - essa - quem | 8183 | Political commentary |
| 3 | - - - - | 7767 | "Technology trends and innovations" |
| 4 | brasil - brasileiro - brasileiros - mundo - povo | 6290 | Deforestation in Brazil |
| 5 | governo - governos - atual - anteriores - pt | 3932 | Environmental policies and their impact on the Amazon rainforest |
| 6 | petition - sign - the - impedir - explora | 3820 | Stop Deforestation in the Amazon |
| 7 | ajudar - ajudem - custa - vamos - vamo | 3708 | Stop Deforestation |
| 8 | ele - presidente - dele - cara - esse | 2147 | Controversial presidential actions and statements on environmental issues |
| 9 | falar - vamos - carne - sobre - na | 2098 | Deforestation and meat consumption in the Amazon |
| 10 | noruega - alemanha - fundo - projetos - suspender | 1960 | Norway and Germany's funding cuts for Amazon projects |
| 11 | assinem - assinar - assinado - assina - abaixo | 1718 | Stop Amazon Deforestation |
| 12 | petition - sign - the - impedir - explora | 1653 | Stop Deforestation in the Amazon |
| 13 | zero - serra - governador - greenpeace - poss | 1438 | Zero deforestation campaign in the Amazon |
| 14 | alertas - alerta - batem - crescem - deter | 1430 | Deforestation alerts in Brazil increase in April, showing data from INPE. |
| 15 | diminuiu - diminui - cai - caiu - ritmo | 1360 | Deforestation rate in the Amazon decreases |
| 16 | carne - comer - consumo - pecu - veganos | 1328 | Impact of meat consumption on Amazonian deforestation |
| 17 | investidores - bancos - empresas - trilh - banco | 1315 | MPF processa bancos por financiamento do desmatamento na Amazônia |
| 18 | moratorium - soy - moratoria - extended - amazon | 1309 | Soy moratorium in Brazil |
| 19 | militares - armadas - militar - opera - for | 1233 | Combating deforestation in the Amazon with military forces |
| 20 | bioma - cerrado - conacer - agropecu - agroneg | 1225 | Deforestation in the Cerrado biome |
| 21 | tamanho - leia - sp - quase - legal | 1220 | Deforestation in the Amazon |
| 22 | manifesta - protesto - contra - protestar - protestos | 1169 | Protest against deforestation in the Amazon |
| 23 | lite - sat - monitorar - monitoramento - lan | 1059 | Monitoring deforestation in Brazil using satellite technology |
| 24 | petici - firma - la - impedir - explora | 1040 | Save the Amazon Rainforest |
| 25 | ganha - realidade - prote - dt - florestas | 973 | Deforestation alerts in Brazil |
| 26 | uhul - walmart - anunciou - comprar - carne | 966 | Walmart's announcement on not purchasing meat from Amazon rainforest deforestation |
| 27 | fogo - queimadas - fuma - chamas - queimada | 950 | Deforestation in the Amazon |
| 28 | uhul - walmart - anunciou - comprar - carne | 881 | Walmart's announcement on not purchasing meat from Amazon rainforest deforestation |
| 29 | gerando - hora - hectares - era - bolsonaro | 861 | Deforestation under Bolsonaro administration |
| 30 | acesse - sabia - hey - saiba - dt | 860 | Deforestation in the Amazon |
| 31 | destruction - billions - funds - fires - with | 792 | Deforestation in Brazil |
| 32 | best - partners - hesitate - raze - loot | 758 | Deforestation and land exploitation |
| 33 | deixe - continue - absurdo - voltou - crescer | 727 | Deforestation in the Amazon |
| 34 | chuva - chuvas - falta - sudeste - encurrala | 676 | Impact of deforestation on regional climate |
| 35 | simples - jeito - entenda - um - via | 657 | Understanding Deforestation in Simple Terms |
| 36 | perdeu - perde - km - metros - quadrados | 648 | Deforestation in the Amazon |
| 37 | diretor - galv - exonerado - demitido - ricardo | 609 | "Controversy surrounds Brazilian Institute of Space Research (INPE) director's dismissal over Amazon deforestation data" |
| 38 | menor - taxa - desde - segunda - registrada | 587 | Deforestation rates in Brazil |
| 39 | hypocrisy - talk - put - funding - did | 563 | Corporate hypocrisy in zero deforestation policies |
| 40 | ela - marido - dela - mulher - fez | 561 | Woman's husband involved in Amazon deforestation |
| 41 | triste - chorar - tristeza - eu - me | 541 | Deforestation in the Amazon rainforest |
| 42 | scandal - bragging - disappearance - while - massive | 533 | Environmental Scandals - BNP and Deforestation |
| 43 | trav - petici - firma - la - impedir | 517 | Preventing deforestation in the Amazon |
| 44 | pandemia - pandemias - xima - novas - epidemias | 516 | Pandemic and deforestation in the Amazon |
| 45 | petition - sign - help - save - please | 501 | Protect the Amazon Rainforest |
| 46 | entrevista - sobre - falar - professor - acha | 501 | Interview on Deforestation in the Amazon |
| 47 | aquecimento - global - clim - clima - mudan | 497 | Impact of deforestation on climate change |
| 48 | petici - firma - la - firm - impedir | 486 | Preventing deforestation in the Amazon |
| 49 | reduzir - incra - alternativa - ajudado - stop | 476 | Reducing deforestation in the Amazon |
| 50 | senado - vice - hamilton - senadores - senador | 472 | Brazilian Senate to debate deforestation and increased burnings in the Amazon |
| 51 | prestes - limite - atingir - irrevers - vel | 470 | Deforestation in the Amazon nearing irreversible limit |
| 52 | ministro - ambiente - meio - salles - ministros | 468 | Controversial Brazilian government officials and environmental policies |
| 53 | papel - trouxa - tanto - eu - minha | 466 | Deforestation due to paper production |
| 54 | quil - quadrados - metros - km - mil | 465 | Deforestation in the Amazon |
| 55 | carbono - emiss - xido - emite - atmosfera | 463 | Carbon Emissions in the Amazon Rainforest |
| 56 | ano - cresce - aumentou - passado - cresceu | 457 | Deforestation in the Amazon |
| 57 | impedir - explora - via - da - nia | 453 | Preventing deforestation in the Amazon |
| 58 | tamanho - leia - sp - quase - legal | 450 | Deforestation in the Amazon |
| 59 | tic - tac - recontar - ativistape - covid | 418 | Deforestation in the Amazon: Uncovering Lost Civilizations |
| 60 | comemorar - dia - comemora - comemorado - celebrar | 417 | Celebrating the Amazon Rainforest (Despite the Current Situation) |
| 61 | existe - sei - agora - entidades - eu | 407 | Existence of deforestation in the Amazon |
| 62 | tition - signez - la - impedir - explora | 388 | Preventing deforestation in the Amazon |
| 63 | partners - best - raze - hesitate - loot | 385 | Deforestation and exploitation of indigenous lands |
| 64 | bate - recorde - seguido - bateu - consecutivo | 377 | Deforestation in the Amazon sets new records |
| 65 | janeiro - aumenta - mais - em - na | 375 | Deforestation in the Amazon increases in January |
| 66 | veganos - comer - vegano - vegetarianos - vegetariano | 368 | Impact of veganism on the Amazon rainforest |
| 67 | boicote - supermercados - produtos - boicotar - alem | 364 | Boycott of German supermarkets due to Amazon deforestation |
| 68 | cumpadi - mariano - treasure - not - yuri | 359 | Deforestation and its consequences |
| 69 | agosto - cai - caiu - deste - diminuiu | 356 | Deforestation in the Amazon in August |
| 70 | dilma - corta - verba - rousseff - presidenta | 354 | Dilma Rousseff's presidency and environmental policies |
| 71 | corta - verba - dilma - contra - medi | 345 | Dilma Rousseff's policies on deforestation in the Amazon |
| 72 | ambiente - meio - ambiental - ambientalismo - ambientais | 345 | Environmentalism and Sustainability |
| 73 | dados - novos - saem - divulga - info | 337 | Deforestation in the Amazon |
| 74 | maio - aumenta - dobra - aumentou - cresce | 334 | Deforestation in May |
| 75 | pior - cinco - worst - ndice - rie | 332 | Deforestation in the Amazon |
| 76 | abril - maior - ltimos - favorecer - anos | 331 | Deforestation in Amazon in April reaches record high |
| 77 | firm - petici - la - impedir - explora | 331 | Preventing deforestation in the Amazon |
| 78 | gado - cria - pasto - ranching - cattle | 326 | Cattle ranching and deforestation in the Amazon |
| 79 | pf - deflagra - opera - df - exija | 326 | Environmental regulations and enforcement in Brazil |
| 80 | estradas - rodovia - rodovias - estrada - asfaltamento | 323 | Impact of roads on deforestation in the Amazon |
| 81 | blog - boletim - sad - imazonsad - post | 322 | Deforestation in the Amazon |
| 82 | setembro - sobe - aumentou - cresce - subiu | 322 | Deforestation in Amazon increases in September |
| 83 | desmatamentozero - voltou - crescer - dt - meses | 318 | Deforestation in the Amazon |
| 84 | mercosul - acordo - europeia - ue - uni | 316 | Mercosul-UE agreement at risk due to deforestation in the Amazon |
| 85 | aumentou - aumento - ano - desde - passado | 315 | Deforestation increases |
| 86 | fevereiro - fo - jornaloglobo - cresce - detecta | 311 | Deforestation in Amazon legal in February |
| 87 | graba - saliendo - quemada - camiones - una | 309 | Deforestation in the Amazon |
| 88 | estuda - sico - desmonte - levar - irrevers | 309 | Deforestation under Bolsonaro's government |
| 89 | acabar - parar - tarefa - precisamos - luta | 307 | Combating Deforestation in the Amazon |
| 90 | outubro - aumenta - aumentou - aponta - sobe | 303 | Deforestation in Amazon increases in October |
| 91 | aumentou - aumento - incontest - voltadas - inverte | 298 | Deforestation in the Amazon |
| 92 | peti - assine - impedir - explora - da | 295 | Preventing Deforestation in the Amazon |
| 93 | cresceu - cresce - final - luizdomingosdeluna - rela | 294 | Deforestation in the Amazon |
| 94 | junho - aumenta - aumentou - cresce - subir | 293 | Deforestation in Brazil |
| 95 | genas - ind - demarca - terras - gena | 292 | Indigenous land demarcation and deforestation in the Amazon |
| 96 | abril - ltimos - maior - dos - foi | 290 | Deforestation in the Amazon in April |
| 97 | setembro - cai - segundo - caiu - inpe | 290 | Deforestation in September |
| 98 | opera - todas - suspende - pantanal - minist | 290 | Environmental policy suspensions |
| 99 | multas - aplicou - somam - milh - aplica | 287 | Deforestation and environmental fines in Brazil |
| 100 | marina - ministra - silva - ela - era | 287 | Marina Silva's tenure as Minister of the Environment |
| 101 | petizione - firma - la - impedir - explora | 286 | Protecting the Amazon Rainforest |
| 102 | atingido - futebol - campos - atinge - estado | 286 | Deforestation in the Amazon reaches an area of 100 football fields |
| 103 | desmatou - mt - grosso - mato - estado | 284 | Deforestation in Mato Grosso, Brazil |
| 104 | mico - econ - valor - micos - desenvolvimento | 282 | Economic value of deforestation in the Amazon |
| 105 | sexta - acre - divulgados - prodes - ltima | 280 | Deforestation rates in the Amazon revealed |
| 106 | google - ferramenta - ajudar - combater - mostrado | 279 | Google tools to combat deforestation in the Amazon |
| 107 | papa - francisco - igreja - nodo - manifestou | 277 | Pope Francis and Amazon Rainforest Conservation |
| 108 | simples - jeito - entenda - youtube - video | 276 | Understanding Deforestation in Simple Terms |
| 109 | corriam - saudades - puro - estrelas - mpido | 275 | Loss of natural beauty and environmental degradation under Bolsonaro's presidency |
| 110 | mar - abril - atinge - entre - km | 272 | Deforestation in the Amazon |
| 111 | acabei - juntos - mp - corrup - dizer | 272 | Ending corruption and deforestation in the Amazon |
| 112 | principal - irrevers - causa - pecu - fator | 268 | Causes of Deforestation in the Amazon |
| 113 | marca - chega - hist - ria - maior | 266 | Deforestation in the Amazon reaches record high |
| 114 | profund - imperdo - devastador - intenso - sofrendo | 260 | Deforestation in the Amazon |
| 115 | coronav - rus - pandemia - ximo - corona | 260 | Impact of COVID-19 on Amazonian deforestation |
| 116 | vacina - vacinas - covid - cloroquina - mortos | 258 | "Vaccine controversy in Brazil" |
| 117 | julho - rela - cresce - mesmo - ao | 254 | Deforestation in the Amazon in July increases |
| 118 | aves - extin - harpia - filhotes - ave | 249 | Deforestation threatens Amazonian bird species |
| 119 | detecta - registra - estima - km - inpe | 244 | Deforestation monitoring in Brazil |
| 120 | gay - homofobia - lgbt - gays - racismo | 242 | LGBTQ+ rights and homophobia in Brazil |
| 121 | impedir - explora - via - da - nia | 242 | Preventing deforestation in the Amazon |
| 122 | plano - preven - ppcerrado - controle - pretende | 241 | Plan for preventing and controlling deforestation in the Cerrado |
| 123 | assinar - assinado - assinem - assinaturas - abaixo | 240 | Protecting the Amazon rainforest through signatures |
| 124 | legal - aumento - apontam - aumentou - revelam | 239 | Deforestation in Brazil's Amazon region |
| 125 | vetar - ligados - importa - produtos - fran | 238 | Deforestation of the Amazon and related products |
| 126 | traders - companies - soy - region - to | 236 | Food companies urge traders to avoid deforestation-linked soy |
| 127 | europa - europeus - totalidade - devastou - preocupada | 235 | European colonization and its impact on the Amazon rainforest |
| 128 | previa - apoiou - milhares - protegidas - cies | 234 | Macron's support for Amazon deforestation project |
| 129 | desmatamentozero - voltou - crescer - dt - meses | 233 | Deforestation in the Amazon |
| 130 | tweet - twitter - retweeted - hashtag - instagram | 233 | Deforestation in the Amazon |
| 131 | year - last - increased - compared - period | 233 | Deforestation in the Amazon |
| 132 | fund - destruction - billions - fires - with | 231 | Deforestation in Brazil |
| 133 | anual - taxa - cai - cerrado - ritmo | 230 | Deforestation rates in the Cerrado region |
| 134 | colours - follow - projeta - rostos - artista | 230 | Artist's project featuring indigenous faces on tree trunks in the Amazon rainforest |
| 135 | dispara - setembro - agosto - em - na | 227 | Deforestation in the Amazon |
| 136 | varia - km - pio - destaque - representando | 221 | Deforestation in Brazilian states |
| 137 | ranking - lidera - par - estado - grosso | 220 | Deforestation rankings in the Amazon states |
| 138 | prayforamazonia - peti - assine - impedir - explora | 217 | Protecting the Amazon Rainforest |
| 139 | fires - fire - amazon - burning - amazonia | 215 | Deforestation in the Amazon through fires |
| 140 | dezembro - dev - disparou - cresce - rela | 214 | Deforestation in the Amazon in December |
| 141 | amazonia - sosamazonia - nfts - aroma - cryptoart | 213 | Deforestation and CryptoArt in Amazonia |
| 142 | sobem - seis - alertas - meses - legal | 209 | Deforestation alerts in Brazilian Amazon increase in six months |
| 143 | estadoimagens - cai - legal - ano - ag | 207 | Deforestation in the Amazon |
| 144 | organizado - crime - raquel - dodge - respons | 206 | Organized Crime and Deforestation in the Amazon |
| 145 | sights - sets - danger - palm - oil | 206 | Amazon rainforest under threat from palm oil expansion in Brazil |
| 146 | aumenta - ano - um - em - apenas | 205 | Deforestation in the Amazon increases in one year |
| 147 | fake - news - fakenews - falsa - mentiras | 205 | Fake News in the Amazon |
| 148 | peixes - riachos - encolhe - esquenta - pesca | 200 | Impact of deforestation on fish size in Amazon rivers |
| 149 | cidade - desmatada - cresce - segundo - setemb | 195 | Deforestation in the Amazon grows in a year |
| 150 | partner - bnpparibas - number - burning - world | 194 | Deforestation partnership with BNPP |
| 151 | imagens - fotos - foto - fotogr - mostram | 190 | Deforestation in the Amazon Rainforest |
| 152 | please - sign - petition - this - help | 190 | Signature campaign for Amazon rainforest conservation |
| 153 | fundo - projetos - reembols - doa - captar | 188 | Fundraising for Sustainable Forest Conservation |
| 154 | leonardo - dicaprio - denuncia - avan - nchen | 187 | Leonardo DiCaprio speaks out against Amazon deforestation |
| 155 | desmascara - petista - ticos - imprensa - deo | 187 | Desmantling of Amazonian deforestation exposed by former Petista government minister |
| 156 | tamanho - leia - sp - quase - legal | 187 | Deforestation in the Amazon |
| 157 | biden - joe - san - micas - eleito | 186 | Joe Biden speaks out against deforestation in the Amazon |
| 158 | julho - passado - cresce - rela - cresceu | 185 | Deforestation in the Amazon grows in July compared to previous years |
| 159 | horizonte - belo - duas - equivalente - perdeu | 184 | Deforestation in June in Amazonia |
| 160 | volta - crescer - subir - voltou - envia | 181 | Deforestation in the Amazon |
| 161 | argentina - comprova - afeta - chuvas - estudo | 181 | Deforestation in Argentina affects rainfall |
| 162 | aumenta - na - taubate - transmaz - durant | 179 | Deforestation in the Amazon region |
| 163 | acabei - juntos - mp - corrup - dizer | 179 | Save the Amazon |
| 164 | faster - burns - another - destroyed - part | 176 | Amazon rainforest destruction |
| 165 | ue - exporta - uni - science - exportadas | 176 | Illegal Deforestation and Exports of Beef and Soy from Brazil |
| 166 | divulga - desmate - alta - agora - governo | 175 | Government announces high desertion rate in Amazon |
| 167 | przez - lewat - tandatangani - petisi - petizione | 173 | Petition to protect the Amazon rainforest |
| 168 | feita - aponta - quadrados - junho - minist | 173 | Deforestation in the Amazon |
| 169 | puta - vc - pariu - merda - filho | 172 | Deforestation in the Amazon and hypocrisy |
| 170 | polui - rica - sul - am - ses | 171 | Deforestation in the Amazon region increases pollution in southern Brazil |
| 171 | gases - estufa - emiss - efeito - emissor | 170 | Deforestation and greenhouse gas emissions in Brazil |
| 172 | seguido - consecutivo - cresce - segundo - operacaobetalab | 170 | Deforestation in the Amazon grows for the second year in a row |
| 173 | boletim - setembro - alta - aumenta - imazon | 168 | Deforestation in the Amazon |
| 174 | found - massive - brazil - discovered - region | 168 | Deforestation in Brazil's Cerrado region |
| 175 | desafiam - madeireiros - ilegais - combate - bbcembora | 166 | Illegal logging in the Amazon |
| 176 | petizione - firma - la - impedir - explora | 165 | Preventing Deforestation in the Amazon |
| 177 | chicken - fed - linked - soya - fast | 164 | Deforestation and Chicken Supply |
| 178 | alertas - ag - afetada - coloca - crescem | 163 | Deforestation alerts in Brazil |
| 179 | ritmo - anuncia - lees - reutersmaranh - esfriassem | 162 | Deforestation and its impact on global warming |
| 180 | ciclo - entrar - seca - mortal - pode | 161 | Ciclo de desmatamento e seca na Amazônia |
| 181 | novembro - aumenta - mensal - quebra - compara | 160 | Deforestation in Brazil's Amazon region increases in November |
| 182 | ouro - minera - mining - gold - causado | 160 | Illegal gold mining causes deforestation in the Amazon |
| 183 | chuvas - distantes - afeta - continente - sudeste | 159 | Impact of deforestation on rainfall in distant regions |
| 184 | endossar - depender - fran - presidente - diz | 159 | Brazilian President's Stance on Soy Dependence and Deforestation |
| 185 | registrada - taxa - menor - info - fb | 158 | Deforestation rates in the Amazon |
| 186 | futebol - campos - equivalente - minuto - mil | 157 | Deforestation in the Amazon equivalent to over 100 football fields in May |
| 187 | co - emiss - este - cai - ano | 157 | Deforestation in the Amazon |
| 188 | balan - confirmam - oficiais - generative - sistema | 157 | Blockchain-based art platform |
| 189 | privada - empresa - contratar - monitorar - edital | 157 | Government to contract private company for Amazon deforestation monitoring |
| 190 | meses - cai - rbr - em - poupados | 157 | Deforestation in the Amazon |
| 191 | aumenta - aumentou - brasil - brasileira - ano | 156 | Deforestation in Brazil's Amazon region increases in one year |
| 192 | dinheiro - fundo - financia - financiar - financiam | 155 | Financing efforts to combat Amazon deforestation |
| 193 | americana - universidade - dobro - registrado - seria | 155 | Deforestation in the Amazon doubled according to university study |
| 194 | escravo - trabalho - mpt - bomba - chefe | 154 | "Escravidão e exploração na Amazônia" |
| 195 | menor - ndice - registra - taxa - tem | 154 | Legal indices of deforestation in Brazil |
| 196 | desmentem - autores - citado - temer - onu | 153 | Desmentem sobre queda na Amazônia |
| 197 | indireto - impulsionam - tricas - hidrel - bbcfloresta | 153 | Indirect Impacts of Hydroelectric Power Plants on the Amazon Rainforest |
| 198 | stopamazonextraction - indigenouspeoplematter - amazoniasinextraccion - indigenouslivesmatter - justicefortheamazon | 152 | Protecting Indigenous Rights in the Amazon |
| 199 | quina - move - multim - retorna - linha | 152 | Agricultural machinery in the Amazon rainforest |
| 200 | perde - hora - hectares - avan - recorrente | 152 | Deforestation in the Amazon |
| 201 | schwarzman - steve - donald - apoiador - impulsiona | 151 | Steve Schwarzman's support for Donald Trump's policies |
| 202 | julho - detectou - derrubadas - menor - re | 151 | Deforestation in the Amazon in July |
| 203 | iluminattis - congelada - observar - geleiras - fossem | 151 | Global Warming Hoax: The Truth Behind Deforestation |
| 204 | tecnologias - tecnologia - ajudam - vigil - combatem | 150 | Technologies for forest monitoring and conservation in the Amazon |
| 205 | meta - reduzir - metas - emiss - brasil | 150 | Brazil's efforts to reduce carbon emissions |
| 206 | pedir - desculpa - trouxa - meus - quero | 150 | Requesting apologies for paper waste |
| 207 | impunidade - crime - humanidade - crimes - rights | 150 | Human Rights Violations and Impunity in Amazonian Deforestation |
| 208 | atrasada - reformar - lbum - fotos - veja | 149 | Reforming an outdated cattle farm in Amazonia |
| 209 | indicam - setembro - alertas - cresce - inpe | 149 | Deforestation in the Amazon increases in September, according to INPE alerts |
| 210 | anuncia - tev - menor - desde - indicam | 148 | Government announces reduced deforestation in Amazon since data shows slower rate of forest destruction |
| 211 | af - comprova - argentina - afeta - perda | 147 | Deforestation in the Amazon affects rainfall in Argentina |
| 212 | desemprego - desempregados - infla - gasolina - educa | 146 | Unemployment and environmental issues in Brazil |
| 213 | del - comerciante - beneficiarse - degradaci - masiva | 146 | Environmental damage and human rights abuses in Amazonian soy and meat production |
| 214 | atingiu - outubro - quil - metros - quadrados | 145 | Deforestation in the Amazon reaches 10,000 square kilometers in October |
| 215 | sacrificar - eleva - demanda - press - crescimento | 145 | Impact of meat and soy demand on Amazonian growth |
| 216 | lib - toneladas - co - emiss - milh | 143 | Deforestation in the Amazon |
| 217 | acumula - mil - estudo - km - anos | 143 | Deforestation in the Amazon |
| 218 | corrup - delatorias - promiscuas - menosprezados - cohen | 143 | Corruption and social issues in Brazil |
| 219 | bacia - campe - xingu - desflorestamento - criam | 142 | Deforestation in the Xingu River Basin |
| 220 | provocou - tamanho - sp - leia - rt | 142 | Deforestation in the Amazon |
| 221 | escudos - xingu - dispara - ltimos - dos | 142 | Deforestation in Xingu National Park |
| 222 | liga - seca - novo - estudo - pa | 142 | Amazonian Deforestation and Drought |
| 223 | unterschreiben - jetzt - impedir - explora - this | 142 | Stop Deforestation in the Amazon |
| 224 | evita - redu - mortes - higi - aeronaves | 142 | Airborne reduction of forest fires in the Amazon avoids annual deaths |
| 225 | petition - sign - assinem - prin - the | 142 | Stop Deforestation in the Amazon |
| 226 | expedi - viagem - realiza - protestar - lincolnte | 141 | Protest against deforestation in the Amazon |
| 227 | moon - ki - ban - mundial - quest | 140 | Global forest degradation |
| 228 | antibi - resistentes - bact - eros - frigor | 139 | Antibiotic resistance in the Amazon rainforest |
| 229 | denunciada - mineradora - dona - contamina - criticar | 139 | Mineradora noruega denunciada por contaminación |
| 230 | assinatura - focos - ndio - nasa - inc | 139 | Indigenous rights and environmental issues in Brazil |
| 231 | needed - preserve - still - shows - study | 138 | Brazil's Soy Moratorium: Preserving the Amazon |
| 232 | radar - vigiar - vai - enfrenta - ganh | 138 | Use of radar technology in Amazon deforestation monitoring |
| 233 | sinais - voltar - crescer - reutersong - preliminares | 137 | Deforestation in the Amazon: Signs of growth and recovery |
| 234 | climatechange - environment - brazil - amazonrainforest - deforestation | 137 | Deforestation in the Amazon and its impact on climate change |
| 235 | tchausalles - brasilpedesocorro - ecossistemabrasileiro - terrasp - desmatamentonaamaz | 137 | Deforestation and its impact on the Brazilian ecosystem |
| 236 | tribunal - penal - humanidade - haia - apresentada | 137 | Environmental crimes against humanity |
| 237 | baleias - baleia - matan - noruega - ca | 137 | Hypocrisy in environmental policies: Norway's whaling practices vs. deforestation in the Amazon |
| 238 | menores - ocorre - cerca - reduzir - depois | 136 | Deforestation in Brazil |
| 239 | entrega - monitorar - lite - resposta - sat | 135 | Brazilian government's use of satellite technology to monitor deforestation |
| 240 | aposta - tribo - google - combater - usu | 135 | Combating deforestation through Google Tribe |
| 241 | janeiro - queda - tem - diz - governo | 135 | Deforestation in Brazil in January, according to government reports |
| 242 | drones - drone - estudante - microchips - provar | 135 | Use of drones in combating deforestation in the Amazon |
| 243 | mapa - interativo - mapas - infoamazonia - atualizado | 135 | Real-time Amazon deforestation maps |
| 244 | verba - monitorar - falta - estad - sustentabilidade | 135 | Lack of monitoring resources for deforestation in the Cerrado region |
| 245 | advancing - track - researchers - frontier - agricultural | 135 | Impact of Brazil's Soy Moratorium on Advancing Agricultural Frontiers |
| 246 | tamanho - leia - sp - quase - legal | 134 | Deforestation in the Amazon |
| 247 | televan - registrar - tvonline - volta - aumento | 134 | Deforestation in the Amazon: Legal and Online Issues |
| 248 | sacas - colheita - rr - fecha - bbb | 134 | Agricultural production in Brazil |
| 249 | junho - cai - imazon - caiu - queda | 134 | Deforestation in Amazon in June |
| 250 | atingir - limite - prestes - irrevers - determinado | 134 | Deforestation in the Amazon nearing irreversible limit |
| 251 | barram - supermercados - brasileiros - suspendem - carne | 133 | Brazilian supermarkets suspend beef purchases due to Amazon deforestation |
| 252 | palm - oil - palmoil - danger - sights | 133 | Amazon rainforest under threat from Brazil's palm oil ambition |
| 253 | stonehenge - misterioso - revela - disp - pedras | 133 | Mysterious Stonehenge-like structure discovered in Amazon rainforest |
| 254 | demanda - press - externa - carnes - eleva | 132 | Demand for meat and soy products affects Amazonian deforestation |
| 255 | tamanho - leia - sp - quase - legal | 132 | Deforestation in the Amazon |
| 256 | ganha - realidade - prote - dt - florestas | 131 | Deforestation alerts in Brazil |
| 257 | du - forestation - politiques - financements - dites | 131 | Political hypocrisy in forestation policies |
| 258 | metade - pela - cai - reduziu - quase | 131 | Deforestation in the Amazon reduces by half |
| 259 | dispara - coletados - neste - chega - ro | 129 | Deforestation in the Amazon |
| 260 | atingiu - desmatada - julho - agosto - cai | 129 | Deforestation in the Amazon |
| 261 | fontes - fonte - vaivendo - source - checadas | 129 | References and sources |
| 262 | eleitoral - explode - durante - odo - per | 129 | Electoral Explosion in the Amazon |
| 263 | afetar - fim - fiscaliza - fundo - ibama | 129 | Fiscalization of Ibama against deforestation |
| 264 | prayforamazonia - prayforbrazil - prayforrondonia - prayforamazonas - saveamazonia | 128 | Protecting the Amazon Rainforest |
| 265 | dicaprio - leonardo - desafio - adere - denuncia | 128 | Leonardo DiCaprio speaks out against deforestation in the Amazon |
| 266 | mar - sobe - ecoc - ong - bate | 128 | Deforestation in the Amazon |
| 267 | possui - fun - metodologia - causas - especialistas | 127 | Causes of drought in the Amazon forest |
| 268 | afetou - degradadas - desmate - mt - sacas | 127 | Affected areas of Amazonian deforestation |
| 269 | girafas - girafa - elefantes - sobra - comem | 127 | Elephants and soja in Amazonia |
| 270 | satiriza - televis - humor - alem - programa | 126 | Satire of Brazilian government's environmental policies on German TV |
| 271 | impeachmentsalvavidas - flavio - abin - loteamento - interfer | 126 | Impeachment and political corruption in Brazil |
| 272 | mensal - agosto - instituto - natureza - abril | 125 | Deforestation in Brazil's Amazon region |
| 273 | interrompe - curva - fora - intensifica - ibge | 125 | Deforestation in the Amazon beyond control |
| 274 | coronavirus - diseases - infectious - next - commentary | 125 | Risk of Infectious Diseases from Amazonian Deforestation |
| 275 | carbon - source - change - climate - linked | 124 | Amazon rainforest as a carbon source |
| 276 | metas - aprovados - apresenta - planos - plano | 123 | Government plans to reduce deforestation in the Amazon |
| 277 | novembro - imazon - monito - alta - refere | 122 | Deforestation in the Amazon in November |
| 278 | nima - rcio - criminoso - amea - sociedade | 122 | Deforestation in the Amazon region |
| 279 | macaco - primatas - barrigudo - mico - extin | 121 | Deforestation and its impact on primates |
| 280 | pecu - causa - ria - folha - diz | 120 | Causes of deforestation in the Amazon |
| 281 | prayforamazonia - prayforamazonas - prayforamazon - peti - assine | 120 | Protecting the Amazon Rainforest |
| 282 | zerar - conseguiu - desmata - diminuir - quer | 120 | Government efforts to reduce illegal deforestation in the Amazon |
| 283 | incra - coloni - assentamentos - promete - diminuir | 120 | INCRAPrometesDiminuirEmDesmatamento |
| 284 | mar - abril - aletas - cai - entre | 120 | Deforestation in the Amazon |
| 285 | quatro - volta - ap - crescer - queda | 120 | Deforestation in the Amazon |
| 286 | isolado - perde - doa - ajuda - analistas | 119 | Deforestation in Brazil |
| 287 | drica - crise - agravam - especialistas - clim | 119 | Climate crisis and energy issues in the Amazon region |
| 288 | segundo - imazon - cresce - ano - um | 119 | Deforestation in the Amazon grows in a year according to IMazon |
| 289 | panelacoforabolsonaro - hs - impeachmentsalvavidas - flavio - abin | 119 | Political scandals and corruption in Brazil |
| 290 | armyhelptheplanet - playforamazonia - peti - assine - impedir | 119 | Conservation efforts in the Amazon rainforest |
| 291 | vestiram - militantes - frica - protesto - greenpeace | 118 | Protest against deforestation in the Amazon |
| 292 | dobra - dobrou - quintuplicou - quase - janeiro | 118 | Deforestation in the Amazon almost doubled in a year |
| 293 | firmam - firmar - incra - assentamentos - assinam | 118 | MPF and INCRAP sign agreement to reduce deforestation in Amazonian settlements |
| 294 | liber - concordou - noruega - pagar - mi | 118 | Norway agrees to pay Brazil more than $100 million for Amazon deforestation reduction |
| 295 | proibir - deflagra - buscam - civil - pf | 118 | Prohibition of Deforestation in the Amazon |
| 296 | monitoramento - ministra - nova - acende - mostra | 118 | Minister's statement on Amazon deforestation reduction |
| 297 | firm - trav - petici - la - impedir | 117 | Preventing deforestation in the Amazon |
| 298 | amazoniasos - prayforamazonas - amazoniaemchamas - prayforamazonia - amazonrainforest | 116 | Protecting the Amazon Rainforest |
| 299 | cinco - vezes - perdeu - maio - novembro | 116 | Deforestation in the Amazon |
| 300 | previa - apoiou - milhares - protegidas - cies | 116 | Macron's support for Amazon deforestation project |
| 301 | dispara - setembro - agosto - segurou - folha | 116 | Deforestation in the Amazon |
| 302 | caiu - odo - per - legal - cai | 115 | Deforestation in the Amazon legal in one year |
| 303 | renovada - morat - ria - prorrogada - maio | 115 | Renewal of soy moratorium extended for another year |
| 304 | triplicar - dizem - cientistas - pode - bolsonaro | 114 | Deforestation in the Amazon under Bolsonaro's presidency |
| 305 | federal - cia - busca - combate - deflagrou | 113 | Combate à desmatamento ilegal na Amazônia |
| 306 | abastecimento - afetar - diminui - planeta - gua | 113 | Water resource management and climate change |
| 307 | impedir - explora - via - da - nia | 113 | Preventing deforestation in the Amazon |
| 308 | partner - bankofamerica - bofa - partenaire - number | 113 | Deforestation partnership with Bank of America |
| 309 | chuvas - seca - relacionada - escassez - drought | 113 | Impact of Drought on Brazil |
| 310 | emergenciaclim - sostenibilidad - aloja - siglo - palmadeaceite | 112 | Amazon Rainforest Sustainability |
| 311 | sad - boletim - imazon - dispon - lisedem | 112 | Monthly Report on Deforestation |
| 312 | multado - flagrante - ms - pma - produtor | 112 | "Produtor multado em R$ mil por desmatamento de Cerrado em MS flagrante" |
| 313 | agosto - aumentou - aumenta - ltimo - paulo | 112 | Deforestation in Brazil increases in August |
| 314 | roraima - rionorte - hemisf - amelhorfronteiraagr - coladobrasil | 112 | Agricultural practices in Roraima |
| 315 | sos - fiscais - liga - estudo - compartilhar | 111 | Deforestation in the Amazon |
| 316 | winning - war - saving - bbc - news | 111 | Deforestation in Amazonia: BBC News Coverage |
| 317 | latifundi - sputnik - garimpos - rt - rou | 111 | Environmental policies and land use in Brazil |
| 318 | orbital - radar - melhorar - novo - fiscaliza | 110 | Monitoring deforestation with new radar technology |
| 319 | repassa - prev - us - noruega - verba | 110 | Norwegian fund repays USD millions to Brazil due to forest degradation |
| 320 | amazonorbolsonaro - amazoniaoubolsonaro - amazonia - detour - cup | 109 | Bolsonaro's Amazon policies |
| 321 | caem - alertas - legal - clipping - ram | 109 | Deforestation alerts in Brazilian Amazon |
| 322 | celulares - antigos - territorial - disputa - discute | 109 | Google uses old smartphones to monitor deforestation in the Amazon |
| 323 | motivos - futuro - ado - dispara - entenda | 109 | Motivations for deforestation in the Amazon |
| 324 | julho - agosto - cai - entre - legal | 109 | Deforestation in the Amazon: Legal and Environmental Impacts |
| 325 | irrepar - recupera - especialista - perda - ambiental | 108 | Deforestation in the Amazon: Expertise in Recovery and Loss |
| 326 | seletivo - detecta - crescimento - minist - meio | 108 | Deforestation in the Amazon |
| 327 | ilegal - bnc - isolados - nero - ltima | 107 | Deforestation in the Amazon |
| 328 | refer - retorno - pesquisador - ponto - atual | 107 | Climate Change Impacts on Marine Ecosystems |
| 329 | dez - agosto - maior - imazon - atinge | 107 | Deforestation in Amazon reaches 10-year high in August |
| 330 | ditaduranuncamais - porcento - jairbolsonaro - impeachmentdebolsonaro - somos | 107 | Impeachment of Jair Bolsonaro and environmental policies |
| 331 | atinge - taxa - menor - cai - anos | 107 | Deforestation in the Amazon reduces tax rate in recent years |
| 332 | publico - incra - inspecionar - minist - federal | 107 | Brazilian government agency responsible for deforestation in the Amazon |
| 333 | seca - sudeste - causada - relaciona - sul | 106 | Causes of drought in the southeast |
| 334 | carv - coibir - pacto - objetivo - empresas | 106 | Illegal Deforestation in the Cerrado |
| 335 | presidenta - destaca - oito - mpf - amento | 106 | Dilma Rousseff highlights decrease in Amazon deforestation |
| 336 | contesta - den - incra - respons - coloniza | 106 | Responsibility for deforestation in the Amazon |
| 337 | criminosas - redes - comandado - hrw - impulsionam | 106 | Human rights abuses in Amazonian regions controlled by criminal networks |
| 338 | gabinete - crise - anuncia - ministra - disparada | 106 | Brazilian government's response to deforestation in the Amazon |
| 339 | cresce - ano - um - ebc - quase | 106 | Deforestation in the Amazon grows in a year |
| 340 | madeira - compram - ses - criticam - compra | 105 | Illegal logging and corruption in Brazil |
| 341 | parlamento - holanda - mercosul - holand - rejeita | 105 | Rejection of Mercosur agreement by Dutch parliament due to Brazilian government's Amazon deforestation |
| 342 | intensifica - fiscaliza - ibama - combater - smasher | 105 | IBAMa's efforts to combat illegal deforestation in the Amazon |
| 343 | comparativo - extens - imagem - revela - nasa | 105 | Deforestation in the Amazon, as revealed by NASA's historical images |
| 344 | registra - maio - hist - taxa - ria | 104 | Deforestation in May |
| 345 | zoios - capela - abestado - abre - prepara | 104 | Preparation of a Capela (Chapel) for Zoios (Ancestral Spirits) |
| 346 | seguido - maior - anos - desde - giro | 103 | Deforestation in the Amazon |
| 347 | ten - leva - apresenta - outubro - taxa | 103 | Deforestation in Brazil |
| 348 | esperar - devem - ministra - desmatada - queda | 103 | Environmental policy in Brazil |
| 349 | rie - rica - menor - agosto - hist | 103 | Deforestation in the Amazon since historical records |
| 350 | culpa - culpados - culpado - culpar - pelo | 103 | Culpa por desmatamento da Amazônia |
| 351 | futebol - campos - perdeu - sob - dia | 102 | Loss of football fields under Bolsonaro's administration |
| 352 | crescem - alertas - legal - filme - tvonline | 102 | Deforestation alerts in Brazilian Amazon |
| 353 | presidenta - rousseff - destaca - espac - confirma | 102 | Brazilian President Dilma Rousseff addresses deforestation in the Amazon |
| 354 | pas - compara - julho - segundo - imazon | 102 | Deforestation in the Amazon in July |
| 355 | dobra - recursos - combater - salles - dobrar | 101 | Government announces measures to combat deforestation in the Amazon |
| 356 | outubro - caiu - ministra - compara - passado | 101 | Deforestation in the Amazon in October |
| 357 | registram - inferior - afirma - pantanal - queda | 101 | Deforestation in Brazil's Cerrado and Pantanal regions |
| 358 | sobe - natureza - meses - ltimos - ong | 101 | Deforestation in the Amazon |
| 359 | afirma - avan - imazon - perseu - abramo | 101 | Brazilian deforestation |
| 360 | mar - aumentou - monitoramento - imazon - comparado | 101 | Deforestation in the Amazon |
| 361 | bate - meses - recorde - cresce - em | 101 | Deforestation in the Amazon hits record high in months |
| 362 | subiu - alerta - ong - brasileira - cam | 101 | Deforestation in Brazilian Amazon raises alarm from NGOs |
| 363 | odo - junho - per - rela - quase | 100 | Deforestation in the Amazon in June |
| 364 | desapropria - disputa - dallagnol - latif - olho | 100 | Land disputes in the Amazon |
| 365 | servidores - nota - ibama - cresce - estimam | 100 | Deforestation in the Amazon grows in one year, according to IBAM |
| 366 | derrubado - deste - registra - foram - julho | 100 | Deforestation in Brazil |
| 367 | anuncia - menor - novas - garraseguros - registra | 99 | Government announces lower deforestation rate in Amazon with new conservation units |
| 368 | reutersentre - ministra - hist - menor - ria | 99 | Deforestation in Brazil's Amazon region |
| 369 | ong - alerta - brasileira - subiu - sobe | 99 | Deforestation in Brazilian Amazon raises alarm from NGOs |
| 370 | greenpeace - game - vers - apresenta - nova | 98 | Greenpeace's campaign against Amazon deforestation |
| 371 | reduziram - lula - devasta - enquanto - dilma | 98 | Deforestation under Bolsonaro's presidency |
| 372 | janeiro - aumenta - imazon - cresce - eco | 98 | Deforestation in Brazil's Amazon region in January |
| 373 | registrado - menor - legal - diz - natureza | 98 | Deforestation in the Amazon: Legal or Illegal? |
| 374 | chega - setembro - aumenta - km - quil | 98 | Deforestation in the Amazon increases and reaches km in September |
| 375 | girafa - girafas - elefantes - desenhou - pintou | 97 | Deforestation in the Pantanal region |
| 376 | julho - supera - cai - compara - anual | 97 | Deforestation in the Amazon in July surpasses previous years |
| 377 | condol - rostos - solidariedade - artista - venho | 97 | Deforestation in the Amazon |
| 378 | trampascontraelclima - suministro - fabricaci - cadena - llevas | 97 | Supply chain management of soy-based animal feed and its impact on the Amazon rainforest |
| 379 | macronfake - macronliar - fundoeleitoralpraamazonia - campeign - somostodosricardosalles | 97 | "Fighting Fake Macron and Illegal Deforestation in the Amazon" |
| 380 | rela - agosto - cresce - mesmo - ao | 97 | Deforestation in the Amazon in August increases |
| 381 | quil - metros - perdeu - mar - adormecido | 96 | Deforestation in the Amazon |
| 382 | cresce - desde - maior - patamar - devastados | 96 | Deforestation in the Amazon |
| 383 | este - maior - aponta - desde - uol | 96 | Deforestation in the Amazon this year reaches record high |
| 384 | tamanho - sp - quase - foi - legal | 96 | Deforestation in Brazil |
| 385 | gio - ximo - atingir - emiss - philip | 96 | Deforestation in the Amazon |
| 386 | cacau - arma - vira - introduz - choco | 96 | Xingu Indigenous Land and Cocoa Production |
| 387 | inflama - retornou - desemprego - economias - juros | 96 | Economic and social impacts of Bolsonaro's government in Brazil |
| 388 | comercializar - renovada - produzida - desmatamentos - compromisso | 95 | Commercialization of new soy productions and deforestation |
| 389 | oito - alem - caiu - estudo - anos | 95 | Deforestation in the Amazon reduced by 8 years according to study |
| 390 | bulldozed - changed - everything - then - ago | 95 | Deforestation of the Amazon |
| 391 | obrasilfelizdenovo - oambiental - agroneg - prote - cio | 95 | Brazilian researcher Eduardo Braga's work on environmental protection |
| 392 | timesde - arqueol - descobertas - desenhos - gicas | 95 | Archaeological discoveries in the Amazon rainforest |
| 393 | perdem - deveriam - cadas - cidades - desafio | 95 | Deforestation challenges in Brazil |
| 394 | antipetista - cruzada - estimula - jn - respondem | 95 | Antipetista Cruzada: Desmatamento na Amazônia |
| 395 | fhc - argumentos - mil - amazoniasemongs - vejam | 95 | Desmatamento da Amazônia e respostas dos governos |
| 396 | meses - cai - perdas - conseguiu - dep | 94 | Deforestation in the Amazon |
| 397 | registrada - derrubada - bras - rvores - lia | 94 | Deforestation in Brazil |
| 398 | desmatam - registrada - levantamento - taxa - anual | 94 | Deforestation rates in the Amazon |
| 399 | comprovamos - governos - pt - cresceu - marcelo | 94 | Deforestation in Brazil under President Marcelo's governments |
| 400 | caiu - ibge - motivado - levantamento - desapareceu | 94 | Deforestation in the Amazon |
| 401 | sou - dosbrasileirosedoplanetaterralongedeserdestegoverno - arquivamaia - aamaz - nativa | 94 | Protection of Native Forests |
| 402 | microsoft - artificial - intelig - previsia - plataforma | 93 | Monitoring and prevention of Amazon deforestation using AI technology |
| 403 | igual - perda - mata - sobe - ong | 93 | Deforestation in the Amazon |
| 404 | megaopera - ibama - realiza - opera - inicia | 93 | "IBAMA's Megaoperation Against Illegal Deforestation in the Amazon" |
| 405 | messias - zeraria - rep - concretas - blica | 93 | Bolsonaro's stance on deforestation in the Amazon |
| 406 | account - unavailable - temporarily - violates - learn | 93 | Twitter media policy violations |
| 407 | seguran - nacional - combater - for - ativid | 93 | Combating illegal deforestation in the Amazon |
| 408 | retalia - mulo - corta - verba - alemanha | 92 | Investment and property disputes in Brazil |
| 409 | heleno - augusto - manipulados - ndices - hackeado | 92 | Political scandal involving Minister Augusto Heleno |
| 410 | explode - entre - goveerno - msnbrasil - explodiram | 92 | Deforestation in Brazilian Amazon |
| 411 | filme - tvonline - tv - co - emiss | 92 | Deforestation in the Amazon |
| 412 | diminuiu - ltima - cada - legal - ebc | 91 | Deforestation in Brazil |
| 413 | atrapalhar - acusam - fiscais - militares - combate | 91 | Military involvement in deforestation |
| 414 | odo - junho - per - mesmo - maior | 91 | Deforestation in the Amazon in June |
| 415 | justa - saia - doador - visita - temer | 91 | Deforestation in Norway |
| 416 | evita - mortes - redu - mil - diminui | 91 | Preservation of Amazonian rainforest saves lives |
| 417 | impedir - explora - prin - forasalles - forabolsonarogenocida | 91 | Preventing deforestation in the Amazon |
| 418 | financiar - acusa - lula - ong - bbc | 90 | Lula government accused of indirectly financing Amazon deforestation through BNDES |
| 419 | retardo - tratora - prevarica - precoce - tratamento | 90 | "Government Incompetence in Healthcare: Bolsonaro's Impact" |
| 420 | abril - aponta - cai - entre - junho | 90 | Deforestation in the Amazon, April to June |
| 421 | outubro - comparado - cai - novembro - mesmo | 90 | Deforestation in the Amazon in October |
| 422 | motivos - preservar - florestas - digoflorestal - greenpeacebr | 90 | Preservation of Brazilian Forests |
| 423 | cresce - ano - cresceu - um - videversus | 90 | Deforestation in Brazil |
| 424 | perde - desde - km - maior - floresta | 90 | Deforestation in the Amazon |
| 425 | alegre - crescem - preocupante - porto - deixou | 89 | Deforestation in the Amazon region |
| 426 | dinossauros - corriam - puro - estrelas - brilhavam | 89 | Lost paradise of a pristine past |
| 427 | chega - quarto - aumenta - devastada - km | 89 | Deforestation in the Amazon |
| 428 | perdem - rj - estados - pantanal - queimadas | 89 | Deforestation in the Pantanal region of Rio de Janeiro |
| 429 | cerveja - colorado - pre - barril - varia | 89 | Cerveja Colorado: Preço varia com desmatamento |
| 430 | office - grileiro - home - abril - triplicar | 89 | Deforestation in the Amazon grows in April, grileiro does not work from home office |
| 431 | apoiou - previa - macron - milhares - protegidas | 88 | Deforestation in the Amazon under Macron's presidency |
| 432 | regra - reserva - conflict - zones - minc | 88 | Brazilian forest reserve regulations and conflicts |
| 433 | vale - plantada - pesquisador - lula - biodiversidade | 88 | Biodiversity and Soja Plantations in the Amazon |
| 434 | ppcdam - preven - plano - controle - frederico | 88 | PPCDAM Plan for Preventing and Controlling Deforestation in the Amazon |
| 435 | armyhelptheplanet - petition - sign - let - save | 87 | Save the Amazon Rainforest |
| 436 | imprensa - jornal - cobre - jornalistas - jornalismo | 87 | Media coverage of deforestation in the Amazon |
| 437 | escudos - xingu - principais - dispara - dos | 87 | Deforestation in Xingu: Shield of the Amazon |
| 438 | deforestation - amazonia - choc - dari - incluyen | 86 | Deforestation in the Amazon |
| 439 | continua - segue - vapor - lideres - enquanto | 86 | Deforestation in the Amazon continues |
| 440 | scandal - bragging - disappearance - while - massive | 86 | Environmental Scandals - BNP and Deforestation |
| 441 | setembro - feedbrasil - governa - realizado - agosto | 86 | Deforestation in Brazil's Amazon region |
| 442 | estuda - sico - bbc - desmonte - levar | 86 | Deforestation in Brazil under Bolsonaro's government |
| 443 | tamanho - sp - quase - cias - legal | 86 | Deforestation in the Amazon |
| 444 | espaciais - pesquisas - instituto - nacional - cresceu | 86 | Deforestation rates in Brazil's Amazon region |
| 445 | mostram - sobe - dados - inpe - prodes | 85 | Deforestation in Brazilian Amazon |
| 446 | lbum - equivale - lise - fotos - vezes | 85 | Deforestation in the Amazon |
| 447 | boicote - plantio - atualizada - empresas - aumenta | 85 | Soya plantation in Amazon rainforest despite company boycotts |
| 448 | berlim - frica - protesto - protestam - esfor | 85 | Protests against deforestation in the Amazon |
| 449 | daehyun - dilmacadeodinheirodopovo - eptv - hithotbr - ka | 84 | Deforestation in the Amazon |
| 450 | menor - taxa - registra - atinge - estadao | 84 | Brazil's deforestation rate reaches record low |
| 451 | sofreram - degrada - fica - ong - mil | 84 | Deforestation in the Amazon |
| 452 | repress - seguran - blica - autoriza - nacional | 84 | Combating Illegal Deforestation |
| 453 | combatem - internet - usando - ndios - notebooks | 84 | Indigenous use of the internet to combat deforestation in the Amazon |
| 454 | pequenas - lbum - propriedades - tornar - fotos | 83 | Decorating Small Properties with Sustainable Materials |
| 455 | abandonar - macron - salvar - europa - precisa | 83 | Macron urges Europe to abandon Brazilian soy to save the Amazon |
| 456 | nova - ong - alta - aponta - no | 83 | Deforestation in the Amazon |
| 457 | oficiais - temperatures - apontam - biologically - amazaon | 83 | Deforestation in Brazil's Amazon region affecting temperature levels |
| 458 | raquel - organizado - afirma - crime - respons | 83 | Organized Crime and Deforestation in the Amazon |
| 459 | fase - controle - plano - nova - este | 82 | Planned deforestation control in the Amazon |
| 460 | peasant - economy - amazonian - registered - property | 82 | Illegal land ownership in the Amazon |
| 461 | tank - went - fields - water - vital | 82 | Deforestation and water scarcity in Brazil's Cerrado region |
| 462 | detectou - sad - quadrados - quil - metros | 82 | Deforestation detection in the Amazon |
| 463 | checa - confere - vs - uol - nasa | 82 | Deforestation in Brazil under Bolsonaro's government |
| 464 | calor - extremo - expor - brasileiros - milh | 81 | Deforestation in the Amazon exposes Brazilians to extreme heat |
| 465 | imagem - revela - nasa - divulga - rica | 81 | Deforestation in the Amazon revealed through historical images |
| 466 | relaciona - pesquisador - seca - cientista - com | 81 | Impact of drought on Amazonian deforestation |
| 467 | rec - filme - tvonline - cultivo - desmatadas | 81 | Agricultural expansion in the Amazon rainforest |
| 468 | alcan - taxa - menor - cai - anos | 80 | Deforestation in the Amazon and tax rates |
| 469 | bate - recorde - brasileira - atinge - abril | 80 | Deforestation in Brazil reaches record high in April |
| 470 | assentamentos - promete - incra - diminuir - nas | 80 | Incra's Promises to Reduce Deforestation in Amazonian Legal Areas |
| 471 | detecta - imazon - agosto - aumento - agencia | 80 | Deforestation in the Amazon in August |
| 472 | garraseguros - tipos - xic - alcantra - brida | 80 | Economic and military strategies for maintaining power and wealth in the world, with a focus on Brazil and China. |
| 473 | agosto - cresce - rela - deste - cresceu | 79 | Deforestation in the Amazon in August |
| 474 | ministra - ditos - minc - anuncia - izabella | 79 | Deforestation in the Amazon |
| 475 | casino - driblam - supermercados - grupo - restri | 79 | Pecuaristas driblam restrições de supermercados |
| 476 | distantes - afeta - chuvas - ses - estudo | 79 | Deforestation in Alagoas affects rainfall in distant areas, according to study |
| 477 | cresce - legal - maracaju - rtcimi - conapub | 79 | Deforestation in the Amazon |
| 478 | lon - umedecer - ajudariam - argentina - grafos | 79 | Deforestation in Argentina affects rainfall |
| 479 | perto - chega - sobe - mil - estad | 78 | Deforestation in the Amazon observed at record high in recent years |
| 480 | petici - firma - amazonasenllamas - amazonas - prayforamazonas | 78 | Protecting the Amazon Rainforest |
| 481 | abelhas - nativas - renda - gera - cria | 78 | Beekeeping in the Amazon |
| 482 | caem - alertas - legal - permanecem - inaceit | 78 | Deforestation alerts in Brazilian Amazon |
| 483 | digo - florestal - novo - cresce - medi | 78 | Deforestation in the Amazon |
| 484 | incra - denuncia - mpf - respons - ter | 78 | MPF denounces INCRA for responsibility in Amazon deforestation |
| 485 | simples - jeito - entenda - um - seriu | 78 | Understanding Deforestation in Simple Terms |
| 486 | errados - dados - cresceu - falsos - discurso | 78 | Disputed data on Amazon deforestation |
| 487 | camiones - con - cargados - quemada - saliendo | 78 | Illegal Logging and Transportation of Soy in Brazil |
| 488 | copia - proposta - pt - contra - ressuscitar | 78 | Bolsonaro's environmental policies |
| 489 | chantageia - destr - pa - soja - leaks | 78 | Cybersecurity and Leaks: Soja Destr and Chantageia |
| 490 | assinem - manas - peti - assine - impedir | 77 | Protecting the Amazon rainforest |
| 491 | novembro - cresce - aponta - guinada - inpe | 77 | Deforestation in November |
| 492 | desaba - vira - lt - strong - boa | 77 | Deforestation in the Amazon |
| 493 | flagra - avi - ibama - uso - identificou | 77 | Illegal use of pesticides in the Amazon rainforest |
| 494 | preliminares - indicam - vios - queda - recorde | 76 | Deforestation in the Amazon |
| 495 | divulga - imazon - degrada - boletim - florestal | 76 | Deforestation in the Amazon |
| 496 | antecipadamente - avisa - far - obtido - exclusividade | 76 | Environmental regulations and enforcement in Brazil |
| 497 | lepera - cleo - luciano - anulastf - rede | 76 | Deforestation in the Amazon under Bolsonaro's government |
| 498 | diogo - pontos - ntara - alc - pasto | 76 | Deforestation in Brazil |
| 499 | gatilho - guedes - puxa - hipocrisia - acabou | 76 | "Guedes' Hypocrisy on Deforestation" |
| 500 | divulga - novos - mundogeo - inpe - dados | 76 | Deforestation in the Amazon |
| 501 | explode - dobro - anterior - quase - explodiu | 75 | Deforestation in Brazil |
| 502 | mato - grosso - quase - ilegal - ltimos | 75 | Illegal deforestation in Mato Grosso |
| 503 | causas - especialistas - cmr - submetida - supress | 75 | Deforestation monitoring in the Amazon |
| 504 | aw - tribo - brit - amea - ong | 75 | Illegal deforestation in the Amazon threatens indigenous tribes |
| 505 | trico - apag - risco - crescer - avan | 75 | Deforestation in Brazil |
| 506 | reduced - dramatically - moratorium - soy - brazil | 75 | Brazil's Soy Moratorium and Deforestation Reduction |
| 507 | setembro - alcan - km - paulona - ig | 74 | Deforestation in the Amazon in September |
| 508 | mentira - verde - grande - al - destrui | 74 | Environmental destruction through greenwashing |
| 509 | aumentou - obsess - devastou - zittonews - pib | 74 | Deforestation in the Amazon |
| 510 | agosto - cai - primeirojornal - em - na | 74 | Deforestation in the Amazon in August |
| 511 | paralisar - falta - salles - recursos - verba | 74 | Urgent need for resources to combat deforestation in the Amazon |
| 512 | impulsionou - reagem - discurso - cientistas - cr | 74 | Government's speech promotes deforestation and fires in the Amazon, according to study |
| 513 | bbc - mentira - verde - news - grande | 73 | Deforestation in Brazil: BBC News Investigation |
| 514 | digo - proposta - florestal - diretor - mudan | 73 | Reform Proposal for Forestry Sector Expectations |
| 515 | frigor - ficos - zerar - ajudar - reduziu | 73 | Use of fridge technology to reduce deforestation in the Amazon |
| 516 | segundo - cai - entre - bimestre - inpe | 73 | Deforestation in the Amazon |
| 517 | italianos - lites - telefones - caderninhos - monitorar | 73 | Monitoring deforestation in the Amazon using Italian satellite technology |
| 518 | fevereiro - atingiu - km - agencia - msn | 72 | Deforestation in Amazon reaches record high in February |
| 519 | siga - online - tvonline - tv - boletim | 72 | Legal aspects of online TV streaming services in Brazil |
| 520 | bife - prato - explica - seu - como | 72 | Deforestation and beef consumption |
| 521 | apresentam - alian - conter - agroneg - ongs | 72 | Proposals for controlling deforestation in the Amazon |
| 522 | atingem - sinais - semestre - bate - junho | 72 | Deforestation alerts in June reach record high |
| 523 | portas - ampliar - abre - congresso - para | 72 | Deforestation in the Amazon |
| 524 | explos - mostram - dados - revela - fragmentadas | 72 | Deforestation in the Amazon |
| 525 | metade - cies - esp - rvores - amea | 71 | Deforestation threatens half of Amazon tree species |
| 526 | calor - frio - infernal - inverno - quente | 71 | Extreme weather and deforestation in the Amazon |
| 527 | bater - volta - recorde - cresce - folha | 71 | Deforestation in the Amazon |
| 528 | temem - especialistas - efeito - aumenta - design | 70 | Deforestation in the Amazon under Bolsonaro's presidency |
| 529 | taxa - aumenta - ano - legal - sobe | 70 | Deforestation rates in the Amazon legal region increase over the past year |
| 530 | minera - respons - isensee - marcio - boi | 70 | Deforestation in the Amazon due to mining activities |
| 531 | instaura - ingressa - mpf - fase - protege | 70 | Illegal deforestation in Brazilian Amazon protected by MPF |
| 532 | junho - aumentou - imazon - diz - vemprarua | 70 | Deforestation in Amazon increases in June |
| 533 | prejud - interrompa - suzano - ma - atua | 70 | MPF demands Suzano stop deforestation in the Cerrado |
| 534 | fascista - fascismo - fascistas - esconder - nazista | 69 | Fascist regime attempts to cover up crimes |
| 535 | mt - imazon - aumenta - teamfollowback - mato | 69 | Deforestation in the Amazon |
| 536 | cai - na - nia - desmatamento - amaz | 69 | Deforestation in the Amazon |
| 537 | julho - tend - anual - diminui - atuam | 69 | Deforestation in the Amazon: Annual increase despite July decrease |
| 538 | aceit - mour - al - regi - vel | 69 | Deforestation in the Amazon |
| 539 | perde - hora - hectares - avan - florestas | 69 | Deforestation in the Amazon |
| 540 | aumentou - temer - governos - governo - reeleita | 69 | Deforestation in Brazil under Temer and Bolsonaro administrations |
| 541 | ministra - izabella - teixeira - aumentou - assegura | 68 | Deforestation in Brazil increases legally, according to minister |
| 542 | devast - setembro - fevereiro - aumentou - imazon | 68 | Deforestation in the Amazon region increases in months, according to INPE data from August to February, showing a devastating trend. |
| 543 | medido - ndice - menor - natureza - ltimos | 68 | Deforestation in Brazilian Amazon |
| 544 | friends - share - with - uol - folha | 68 | Deforestation in the Amazon |
| 545 | ap - consecutivos - volta - quatro - recua | 68 | Deforestation in the Amazon |
| 546 | terraviva - discutido - cop - sustent - pode | 67 | Deforestation in the Amazon |
| 547 | pirulla - abaixo - nasa - dia - meada | 67 | Deforestation in the Amazon |
| 548 | agosto - julho - cresce - entre - quase | 67 | Deforestation in the Amazon grows almost between August and July |
| 549 | armyhelptheplanet - armysavetheplanet - peti - assine - impedir | 67 | Army Helps Save the Planet |
| 550 | vig - conto - escala - planet - pan | 67 | Deforestation in the Amazon |
| 551 | detectou - imazon - quil - metros - km | 67 | Deforestation monitoring using Amazon's detection technology |
| 552 | requer - pedido - financiamento - vontade - bi | 66 | Norwegian government's initiative to reduce deforestation |
| 553 | tecnologias - ajudam - vigil - sete - desflorestamento | 66 | Use of technology in forest monitoring and management |
| 554 | suspens - todas - anuncia - ricardo - opera | 66 | Environmental policies in Brazil |
| 555 | glenn - agressivo - relaciona - quantidade - local | 66 | Deforestation in Brazil's Amazon region under Bolsonaro's plan |
| 556 | companhia - conter - ambientais - opera - ter | 66 | Environmental consulting services for forest conservation |
| 557 | desarticula - quadrilha - grilagem - grge - opera | 65 | Deforestation and illegal logging in the Amazon |
| 558 | multinacionais - couro - boicotar - querem - certificado | 65 | Multinational companies seek to boycott Brazilian leather and meat due to certification issues |
| 559 | huelgamundialporelcambioclim - ceniza - reduciendo - rojas - tienen | 65 | Deforestation and its impact on the environment |
| 560 | estima - emiss - inpe - queda - decl | 65 | Deforestation in the Amazon and its estimated emissions |
| 561 | lidera - rondonienses - cida - rolado - jacy | 65 | Deforestation in Rondonia, Brazil |
| 562 | div - recuperar - anual - taxa - quer | 65 | Deforestation in Brazilian Amazon |
| 563 | deixando - efeito - lises - gua - caatinga | 65 | Deforestation in the Caatinga region of Brazil |
| 564 | firmen - petici - firma - firmate - petizione | 65 | Petition to prevent deforestation in the Amazon |
| 565 | americana - universidade - dobro - registrado - seria | 65 | Deforestation in Brazil |
| 566 | saved - corporate - pledges - won - he | 65 | Corporate pledges to save Brazil's Cerrado forests |
| 567 | far - opera - reduzir - ibama - fiscaliza | 65 | Brazilian government's efforts to reduce deforestation in the Amazon |
| 568 | oculto - patrocina - sos - fiscais - dinheiro | 64 | Illegal financial activities in the Amazon rainforest |
| 569 | verbete - terrasp - ecossistemabrasileiro - desmatamentonaamaz - blicas | 64 | Deforestation in Brazil |
| 570 | registrar - volta - cresceu - passado - agosto | 64 | Deforestation in the Amazon |
| 571 | gases - emiss - reduziu - estufa - redu | 64 | Reducing greenhouse gas emissions through deforestation prevention |
| 572 | winning - war - saving - on - deforestation | 64 | Protecting Amazonia: The War Against Deforestation |
| 573 | pandemia - aproveitar - passar - momento - aproveitando | 64 | "Minister's plan to exploit pandemic for Amazon deforestation" |
| 574 | comenta - ministra - ambiente - meio - dados | 64 | Deforestation in the Cerrado region |
| 575 | justi - denuncia - incra - mpf - respons | 63 | MPF denounces INCRA's responsibility for Amazon deforestation |
| 576 | financeira - debates - cita - debate - trump | 63 | Biden's stance on Amazon rainforest deforestation and financial aid to Brazil during presidential debates |
| 577 | cala - comemora - quatro - esquerda - menor | 63 | Government celebrates minor deforestation decrease in Amazon in four years |
| 578 | aumentar - imazon - estudo - amozonia - vai | 63 | Deforestation in the Amazon |
| 579 | abril - cresceu - aponta - aumenta - cartacapital | 63 | Deforestation in the Amazon increases in April, according to INPE |
| 580 | cai - legal - aponta - perderam - aps | 63 | Deforestation in the Amazon legal issue |
| 581 | confirma - recursos - minist - instala - bases | 63 | Combating deforestation in the Amazon |
| 582 | gelada - mentira - fosse - aquecimento - verdade | 62 | Causes of global warming: Debunking the myth of deforestation |
| 583 | epidemia - dico - xima - caminho - pr | 62 | Epidemic Alert: Deforestation in the Amazon |
| 584 | registrado - perdeu - menor - agosto - entre | 62 | Deforestation in Brazil |
| 585 | relat - ilegal - aponta - brasileira - narcotrafic | 62 | Illegal deforestation in Brazilian Amazon linked to drug cartels |
| 586 | junho - maior - anos - aponta - desmataram | 62 | Deforestation in June: Highest in Years |
| 587 | estimam - triplicar - cen - cientistas - pode | 62 | Deforestation in the Amazon under Bolsonaro's government |
| 588 | prever - interoce - rasga - rodovia - iniciam | 62 | Impacts of Infrastructure Development in the Amazon Region |
| 589 | brian - mier - assumam - estimulando - estar | 61 | Bolsonaro's environmental policies |
| 590 | vargas - hrs - ae - avenida - puder | 61 | Protests in Rio de Janeiro against deforestation in the Amazon |
| 591 | jornaloglobo - aumentou - primeiro - ano - miriamleitaocom | 61 | Deforestation in the Amazon increases |
| 592 | unidades - conserva - cresce - meio - medi | 61 | Deforestation in Amazonian units |
| 593 | bact - diversidade - reduz - rias - estudo | 61 | Deforestation reduces bacterial diversity in Amazon soil, study finds |
| 594 | luz - emerg - liga - conta - clim | 61 | Deforestation and Climate Change |
| 595 | custo - custar - hectare - custam - milh | 61 | Cost of deforestation in the Amazon |
| 596 | semestre - primeiro - mostra - ltimos - imazon | 61 | Deforestation in Brazil |
| 597 | fracassam - miss - militares - conter - especial | 61 | Military efforts to contain deforestation in the Amazon |
| 598 | isa - triplo - genas - terras - ind | 61 | Deforestation in Brazilian Indigenous Lands |
| 599 | cai - legal - inpe - diz - respeitado | 61 | Deforestation in the Amazon: Legal and Regulatory Aspects |
| 600 | petici - firma - la - legendarios - marcosmion | 61 | Legendarios Marcos Mion: Petição para impedir desmatamento na Amazônia |
| 601 | proporcionalmente - pesquisador - maior - cerrado - foi | 60 | Deforestation of the Cerrado |
| 602 | mico - econ - valor - micos - morat | 60 | Economic impact of soy farming in the Amazon |
| 603 | indeterminado - renovada - morat - xima - tempo | 60 | Renewable soybean moratorium |
| 604 | cap - tulo - elei - biden - eua | 60 | Political satire: Biden's cap tulo |
| 605 | seis - bate - mar - recorde - ltimos | 60 | Deforestation in the Amazon breaks records |
| 606 | outu - chegou - afirma - avan - homic | 60 | Deforestation in the Amazon |
| 607 | minist - cai - quase - rio - ano | 60 | Deforestation in the Amazon |
| 608 | temperatura - elevar - pode - aumentar - graus | 60 | Deforestation and temperature increase |
| 609 | lobo - guar - dula - ilustrar - escolhido | 60 | Deforestation and its impact on wolves |
| 610 | ingl - ver - combate - developmentouroboros - ouroboros | 60 | Combat of Bolsonaro's government against deforestation in the Amazon |
| 611 | desmamamento - problema - nao - preocupadissima - febraban | 60 | Deforestation in the Amazon |
| 612 | check - latest - article - thanks - out | 59 | Deforestation in the Amazon |
| 613 | reinaldo - azevedo - bens - velocidade - secas | 59 | Deforestation in the Amazon |
| 614 | junho - aumenta - comparado - amigosdadilma - jessy | 59 | Deforestation in June |
| 615 | agentes - ataques - ibama - explos - equipes | 59 | Attacks on environmental agents in Brazil |
| 616 | boicotam - theft - spurs - experts - investment | 59 | Land theft and deforestation in Brazil due to US investment |
| 617 | anuncia - ministra - ilegal - cristina - contra | 59 | Brazilian government takes action against illegal deforestation in the Amazon |
| 618 | amazoniasos - amazoniaemchamas - amazonialife - amazoniaenossa - amazoniabrasileira | 59 | Protecting the Amazon Rainforest |
| 619 | unidades - conserva - ocorreu - garimpo - ocorreram | 59 | Deforestation for gold mining in the Amazon |
| 620 | gua - falta - sudeste - causas - ligada | 59 | Deforestation in the Amazon and its impact on water supply |
| 621 | nativa - rvore - surto - reverter - ajudar | 59 | Conservation of Native Forests |
| 622 | prop - stria - zero - partiu - mpf | 59 | Zero deforestation agreement for meat industry in Brazil |
| 623 | fevereiro - instituto - cresce - diz - dilmapedeprasair | 59 | Deforestation in Brazil in February according to institute |
| 624 | comparada - dobro - natureza - janeiro - setembro | 59 | Deforestation alerts in the Amazon |
| 625 | senten - demorar - processo - faz - ter | 58 | Deforestation in the Amazon |
| 626 | coibir - manda - autoriza - general - dentro | 58 | Deforestation in Brazil under Bolsonaro |
| 627 | reduz - abc - legal - brasil - di | 58 | Brazil reduces deforestation in the Amazon through legal measures |
| 628 | fazendeira - multada - jatob - ip - nativas | 58 | Illegal deforestation in Brazilian Cerrado |
| 629 | lites - indicam - sat - queda - olevantamento | 58 | Deforestation in the Amazon |
| 630 | mercosul - ratificar - acordo - ue - alemanha | 58 | Germany's stance on ratifying Mercosur agreement due to Amazon deforestation |
| 631 | impeachment - impeachmentbolsonarourgente - pedidos - impeach - paramos | 58 | Impeachment and deforestation in Brazil |
| 632 | novembro - sobe - janeiro - entre - intervalo | 58 | Deforestation in the Amazon during November and January |
| 633 | monitoramento - plantio - resultados - morat - far | 58 | Monitoring of soy farms in the Amazon |
| 634 | cresce - iirsa - jornaldacbn - okariri - russa | 58 | Deforestation in the Amazon |
| 635 | transforma - ltimas - consolida - acumulada - cadas | 58 | Deforestation and land transformation in the Amazon |
| 636 | velho - porto - pio - munic - frien | 58 | Deforestation in Porto Velho |
| 637 | carros - emite - co - dobro - gera | 58 | Emissions from cars |
| 638 | comercializar - prorroga - moratoria - abrasilia - amazon | 57 | Brazil extends moratorium on Amazon soy commercialization |
| 639 | traves - val - baiano - resposavel - arvores | 57 | Deforestation in the Amazon |
| 640 | paribas - bnp - restrictive - policy - financiar | 57 | BNP Paribas' policy on deforestation in Amazon and Cerrado regions |
| 641 | fevereiro - aumentou - legal - renunciedilma - em | 57 | Deforestation in Brazil's Amazon region in February |
| 642 | monitoramento - sistemas - real - monitorado - tempo | 57 | Monitoring and Aprimorando Sistemas de Desmatamento |
| 643 | reuters - paulo - destrui - avan - aumentou | 57 | Deforestation in the Amazon |
| 644 | soja - montsanto - contacomigobrasil - intanhang - huahuahuahua | 57 | Impact of Soybean Farming in Brazil |
| 645 | confirmado - degrada - sobe - janeiro - ong | 57 | Deforestation in the Amazon |
| 646 | poluem - ricos - quanto - tanto - ses | 57 | Environmental degradation in the Amazon region |
| 647 | duas - vezes - dobro - maior - cainarede | 57 | Deforestation in the Cerrado region |
| 648 | noruegu - sucesso - estagnado - reconhece - relat | 56 | Success of Brazil in combating deforestation |
| 649 | sobe - meses - ltimos - nos - inpe | 56 | Deforestation in the Amazon |
| 650 | reduz - tedxm - reduziu - gilberto - dez | 56 | Brazil reduces deforestation in the Amazon |
| 651 | sofre - crescente - consecutivo - degrada - aumentou | 56 | Deforestation in the Amazon |
| 652 | amazonia - bilhao - landrights - defloresta - dolares | 56 | Deforestation in the Amazon |
| 653 | pe - alimentos - produzir - poss - desmatar | 56 | Agricultural production in the Amazon rainforest |
| 654 | duas - desmatadas - paulo - estadao - aumento | 56 | Deforestation in Brazil |
| 655 | segundo - imazon - cresce - ano - um | 56 | Deforestation in the Amazon |
| 656 | safra - afetar - reduz - chuvas - sul | 56 | Climate Impacts on Southern Amazon Crops |
| 657 | sobe - setembro - at - ano - inpe | 56 | Deforestation in the Amazon |
| 658 | sustentabilidade - estad - avan - mobile - requerer | 56 | Deforestation and its impact on sustainability in Brazil |
| 659 | colniza - retrato - ada - gelo - amea | 56 | Deforestation in Mato Grosso |
| 660 | etanol - manchar - impulsionar - responsabilizam - cana | 56 | Ethanol industry's environmental impact |
| 661 | izabella - anunci - teixeira - ltimo - caiu | 56 | Minister Izabella Teixeira announces decrease in deforestation rate |
| 662 | toneladas - carbono - co - emiss - ap | 56 | Carbon emissions from Amazonian deforestation |
| 663 | supera - motosserras - oficial - dado - indica | 56 | Deforestation in the Cerrado region exceeds official data |
| 664 | fatura - cobra - afetar - acumulado - clima | 56 | Deforestation and its impact on climate change |
| 665 | peti - assine - impedir - explora - rapidinho | 56 | Stop Deforestation in the Amazon |
| 666 | zos - preju - causar - clim - alteram | 56 | Impacts of climate change on the Amazon rainforest |
| 667 | imagens - aceleradas - trecho - lite - mostram | 56 | Deforestation in the Amazon |
| 668 | recomponham - enfraquecer - florestasemcortes - atua - corta | 56 | Deforestation and environmental degradation in Brazil |
| 669 | hattem - marcel - perfeito - resumo - partid | 55 | Marcel Van Hattem's Perfect Resume |
| 670 | guias - perdoa - morrendo - fome - maiores | 55 | Food insecurity due to deforestation in the Amazon |
| 671 | cresce - ano - inpe - diz - pedrogiovany | 55 | Deforestation in Brazil |
| 672 | foradilma - anta - viva - deixar - marina | 55 | Anta's plan to end the Amazon and leave us without water |
| 673 | microbiana - homogeneiza - bact - diversidade - solo | 55 | Microbial diversity in soil |
| 674 | aviagens - mapthesystem - brazil - tractors - forgotten | 55 | Deforestation in Brazil's Cerrado |
| 675 | liga - europa - empresas - eua - ong | 55 | Businesses and environmental organizations in Europe and the US advocating against deforestation in the Amazon |
| 676 | menor - atinge - registra - conservados - corrigiu | 55 | Minor deforestation recorded in years, according to government |
| 677 | esperar - siga - online - tvonline - suspeitas | 55 | Fake news or false message |
| 678 | fungo - melhora - plantada - desenvolvimento - bioma | 55 | Fungi-based soil amendments for improving soybean growth and development |
| 679 | confirma - reutersde - quar - divulgados - nesta | 55 | Deforestation in Brazil |
| 680 | atinge - sobe - mil - km - eleicoesal | 54 | Deforestation in Alagoas, Brazil |
| 681 | marca - dez - pior - atinge - cresce | 54 | Deforestation in Brazil reaches worst mark in 10 years |
| 682 | indeniza - advocacia - agu - cobra - ajuizadas | 54 | Illegal deforestation in the Amazon |
| 683 | envolverde - registra - ag - maio - hist | 54 | Deforestation in Brazil |
| 684 | noutra - ladroagem - pecados - documentado - scoa | 54 | Corruption and environmental damage under the Bolsonaro government |
| 685 | cai - paratequerogrande - mexeu - caiu - geotropismo | 54 | Deforestation in the Amazon |
| 686 | erra - relativo - ricardo - zero - salles | 54 | Ricardo Salles' claims of zero deforestation in Amazonia |
| 687 | sanders - bernie - democratas - senadores - acusam | 54 | Bernie Sanders and Democratic Senators Accuse Bolsonaro of Supporting Amazon Deforestation |
| 688 | protesto - vargas - hrs - avenida - compartilhar | 54 | Environmental activism in Rio de Janeiro |
| 689 | marketing - corporativo - solu - morat - ou | 53 | Marketing Leaks and Corporate Responsibility |
| 690 | agu - cobra - bilh - legal - infratores | 53 | Illegal deforestation in the Amazon |
| 691 | outubro - aumentou - segundo - imazon - km | 53 | Deforestation in the Amazon in October increased according to IMazon |
| 692 | sigilosas - obstruiu - vazou - destroem - suspende | 53 | Environmental policy and regulation |
| 693 | espaciais - pesquisas - instituto - nacional - espacial | 53 | Deforestation in Brazil's Amazon region, as reported by INPE (National Institute for Space Research) |
| 694 | apontou - apresentou - amap - quil - quadrados | 53 | AMAP's report on deforestation in the Amazon |
| 695 | combat - causas - lo - como - da | 53 | Deforestation in the Amazon: causes and solutions |
| 696 | coordenadora - exonera - ap - alerta - recorde | 53 | Deforestation alert in Brazil |
| 697 | bimestre - primeiro - atinge - aponta - km | 53 | Deforestation in Brazil's Amazon region |
| 698 | registrado - ndice - menor - desmatad - confundimos | 53 | Deforestation in Brazil |
| 699 | dobra - planta - desmatada - rea - ppi | 53 | Agricultural expansion in Brazilian Amazon |
| 700 | crime - haddad - ditar - responsabilidade - pergunta | 53 | Responsibility for environmental damage in the Amazon |
| 701 | plantas - extin - levar - cies - esp | 53 | Deforestation of the Cerrado and its impact on plant species |
| 702 | expressivo - acusa - imazon - aumento - detec | 53 | Deforestation in the Amazon |
| 703 | peru - limpada - cresceu - frente - aos | 53 | Deforestation in Peru |
| 704 | sobe - ong - meses - ltimos - nos | 53 | Deforestation in the Amazon |
| 705 | decreta - escrevo - prezado - deputado - vote | 53 | Illegal deforestation in Brazil |
| 706 | saving - war - winning - bbc - news | 52 | Saving Amazonia: BBC News Story |
| 707 | julh - cai - segundo - desmatado - total | 52 | Deforestation in the Amazon |
| 708 | dispara - sob - cresce - bolsonaro - flamengo | 52 | Deforestation in the Amazon under Bolsonaro's governance |
| 709 | pecu - respons - seguidor - topblog - ria | 52 | Responsibility of cattle ranchers in Amazon deforestation |
| 710 | seringueiros - aceleram - extrativismo - troca - sob | 52 | Deforestation and cattle ranching under Bolsonaro's government |
| 711 | tulo - reduz - dar - aos - terras | 52 | Land titling for indigenous communities reduces deforestation in the Amazon |
| 712 | sobem - meses - alertas - sobe - alerta | 52 | Deforestation alerts in Brazilian Amazon |
| 713 | julho - aponta - estudo - aumenta - legal | 52 | Deforestation in the Amazon: Legal Increase in July |
| 714 | tropa - atuar - especial - vai - contra | 52 | Government special forces to combat deforestation in the Amazon |
| 715 | advinha - ministro - prev - adivinha - causado | 52 | Deforestation in the Amazon |
| 716 | filme - pata - netflix - boi - document | 52 | Deforestation in the Amazon and its Impact on Cattle Ranching |
| 717 | ministra - senhora - vc - marido - era | 52 | Political figure's environmental record |
| 718 | inventa - desmentido - nimo - caiu - mour | 52 | Myths and misconceptions about the Amazon rainforest |
| 719 | agrofloresta - hectare - rentabilidade - sustenta - semi | 52 | Economic feasibility of agroforestry in Brazil |
| 720 | cran - scandale - disparition - fum - colombie | 52 | Deforestation in Colombia and Brazil |
| 721 | underground - plows - demand - meat - up | 51 | Global meat demand and its impact on the environment, specifically in Brazil. |
| 722 | dengue - malaria - mal - mosquito - mosquitos | 51 | Disease transmission through mosquitoes in the Amazon region |
| 723 | tandatangani - petisi - impedir - explora - via | 51 | Prevent Deforestation in the Amazon |
| 724 | partidos - stf - plano - preven - retomada | 51 | Political parties demand effective execution of the plan to prevent deforestation in the Amazon |
| 725 | doem - puderem - pegando - assinem - ta | 51 | Forest fires in the Pantanal region |
| 726 | sucesso - jn - teve - dez - reduz | 51 | Success of Brazil in combating deforestation in the Amazon |
| 727 | diminui - taxa - meses - ficou - venilsonfer | 51 | Deforestation rates in the Amazon decrease in months |
| 728 | aquece - geleiras - oceanos - congelada - sol | 51 | Deforestation and its effects on climate change |
| 729 | tamanho - leia - sp - quase - legal | 51 | Deforestation in the Amazon |
| 730 | aumento - rela - registrando - altos - detectou | 51 | Deforestation in the Amazon region |
| 731 | detecta - imazon - dois - ibirapuera - equivale | 50 | Deforestation in Amazon detected by Imazon |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
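For convenience, here is a minimal sketch of how the hyperparameters above map onto a `BERTopic` constructor call. The embedding model and the `docs` variable are assumptions for illustration only, since neither is recorded in this card:

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

# Placeholder embedding backend: the actual embedding model is not recorded in this card.
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")

topic_model = BERTopic(
    embedding_model=embedding_model,
    calculate_probabilities=False,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=True,
    zeroshot_min_similarity=0.7,  # zero-shot settings; require a recent BERTopic release
    zeroshot_topic_list=None,
)

# topics, _ = topic_model.fit_transform(docs)  # docs: a list of the source documents (strings)
```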
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.7.0
* Transformers: 4.40.2
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
|
xuliu15/openai-whisper-small-frisian-32r-10h | xuliu15 | 2024-05-18T05:57:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T05:57:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WebraftAI/synapsellm-7b-mistral-v0.5-preview2 | WebraftAI | 2024-05-18T05:55:26Z | 1,504 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-09T16:06:28Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- code
model-index:
- name: synapsellm-7b-mistral-v0.5-preview2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 52.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.5-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 75.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.5-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.5-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.47
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.5-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.5-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 27.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.5-preview2
name: Open LLM Leaderboard
---
# SynapseLLM:
SynapseLLM, a significant achievement by WebraftAI, represents a series of large language AI models designed to create robust, generalized, and decentralized information systems. This repository specifically houses the SynapseLLM finetuned version of Mistral. The finetuning was conducted on a custom dataset, albeit limited in scope, focusing on code and general question-answering scenarios. This adaptation showcases the model's versatility and applicability within specific domains, contributing to the broader landscape of AI advancements.
## Model Details
**SynapseLLM:**
- Parameters: 7B
- Learning rate: 2e-4
- Adapter used: Qlora
- Precision: float16
- Batch size: 32
- Maximum gradient normal: 0.3
- Optimizer: paged_adamw_32bit
- Warmup Ratio: 0.03
- Step(s) (trained): 2000
- Epoch(s) (trained): 1
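For context, the sketch below shows how these hyperparameters would typically be expressed in a QLoRA fine-tuning setup with `transformers` and `peft`. The LoRA rank, alpha, and output path are placeholders, not values taken from this card:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

# 4-bit quantized base model with float16 compute, as used in typical QLoRA runs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter settings: r and lora_alpha are placeholders (not stated in this card).
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
)

# Training arguments mirroring the values listed above (2000 steps, 1 epoch).
training_args = TrainingArguments(
    output_dir="synapsellm-7b-qlora",  # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    max_grad_norm=0.3,
    optim="paged_adamw_32bit",
    warmup_ratio=0.03,
    max_steps=2000,
    fp16=True,
)
```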
### Model Description
This is a 7B-parameter, decoder-only, transformer-based model finetuned on chat Q/A and code instructions. It is a preview finetune of Mistral 7B v0.1 on a sample dataset of 1.54M rows comprising 361k Maths Instruct Q/A, 143k GPT-3.5 Q/A, 140k general code, 63k Python code, and 900k general Q/A (generated through GPT-4); each row contains one instruction and one response. The trained adapters have been merged into the full model, so it can be loaded directly through the `transformers` library.
- **Developed by:** WebraftAI
- **Funded by:** Webraft Cloud
- **Shared by:** WebraftAI
- **Model type:** Decoder-only Transformer
- **Language(s):** English Only
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7b-v0.1
### Prompt format:
This model follows the same prompt format as Mistral 7B Instruct v0.1. A sample prompt is given below:
```text
<s>[INST] Hello, how are you? [/INST]
```
### Example Code:
Here is example code using the `transformers` library provided by Hugging Face.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the merged (adapter-compiled) model from the Hub
tokenizer = AutoTokenizer.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.5-preview2")
model = AutoModelForCausalLM.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.5-preview2")

# Prompt in the Mistral-Instruct format shown above
prompt = "<s>[INST] Hello! [/INST] "

# Move model and inputs to the GPU, then generate a response
device = "cuda"
model.to(device)
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)

generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```
### Model Bias:
This model has some known bias and limitation areas, discussed below:
- The model might output factually incorrect information.
- The model does not follow system prompts.
- The model has no built-in memory; researchers can experiment with feeding prior context back into the prompt.
- The model is trained on several different datasets, so it may reproduce biased information or claim to be a GPT model.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.5-preview2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.93|
|AI2 Reasoning Challenge (25-Shot)|52.22|
|HellaSwag (10-Shot) |75.54|
|MMLU (5-Shot) |51.64|
|TruthfulQA (0-shot) |55.47|
|Winogrande (5-shot) |73.09|
|GSM8k (5-shot) |27.60|
|
damgomz/ThunBERT_bs16_lr4 | damgomz | 2024-05-18T05:51:14Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-15T09:46:14Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-11T13:43:05'
project_name: ThunBERT_bs16_lr4_emissions_tracker
run_id: 2127a736-b71b-4645-9087-b3b853e3658a
duration: 169047.08735966682
emissions: 0.1769390204060543
emissions_rate: 1.0466848211918404e-06
cpu_power: 42.5
gpu_power: 0.0
ram_power: 37.5
cpu_energy: 1.9956917336564937
gpu_energy: 0
ram_energy: 1.7608956085666998
energy_consumed: 3.756587342223187
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 4
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 100
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 169047.08735966682 |
| Emissions (Co2eq in kg) | 0.1769390204060543 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 37.5 |
| CPU energy (kWh) | 1.9956917336564937 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 1.7608956085666998 |
| Consumed energy (kWh) | 3.756587342223187 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 4 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.3254156431673586 |
| Emissions (Co2eq in kg) | 0.0662101092158695 |
## Note
15 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ThunBERT_bs16_lr4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 0.0005 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 41045 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 6.672004 | 11.475011 |
| 0.5 | 7.858635 | 7.770452 |
| 1.0 | 7.738486 | 7.745920 |
| 1.5 | 7.705243 | 7.718904 |
| 2.0 | 7.692092 | 7.719356 |
| 2.5 | 7.684155 | 7.708430 |
| 3.0 | 7.674921 | 7.695158 |
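For a quick functional check, the snippet below queries this checkpoint with the `fill-mask` pipeline. This is a sketch only: it assumes the pushed weights expose a masked-language-modeling head, which the card does not state explicitly.

```python
from transformers import pipeline

# Hypothetical inference example for this ALBERT-based checkpoint.
fill_mask = pipeline("fill-mask", model="damgomz/ThunBERT_bs16_lr4")
print(fill_mask("The capital of France is [MASK]."))
```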
|
kawagoshi-llm-team/llama3_en95B_ja85B | kawagoshi-llm-team | 2024-05-18T05:49:48Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T05:45:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DUAL-GPO-2/phi-2-gpo-v7-i2 | DUAL-GPO-2 | 2024-05-18T05:48:28Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO-2/phi-2-gpo-v34-merged-i1",
"base_model:adapter:DUAL-GPO-2/phi-2-gpo-v34-merged-i1",
"region:us"
] | null | 2024-05-18T01:07:26Z | ---
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO-2/phi-2-gpo-v34-merged-i1
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-gpo-v7-i2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-v7-i2
This model is a fine-tuned version of [DUAL-GPO-2/phi-2-gpo-v34-merged-i1](https://huggingface.co/DUAL-GPO-2/phi-2-gpo-v34-merged-i1) on the HuggingFaceH4/ultrafeedback_binarized dataset.
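Since this repository contains a PEFT adapter rather than full model weights, a minimal loading sketch is given below. The dtype and `trust_remote_code` flag are assumptions, not stated in this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "DUAL-GPO-2/phi-2-gpo-v34-merged-i1"
adapter_id = "DUAL-GPO-2/phi-2-gpo-v7-i2"

# Load the base model first, then attach this repository's adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```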
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- total_eval_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
dandan27/My-Voices | dandan27 | 2024-05-18T05:41:17Z | 0 | 0 | fasttext | [
"fasttext",
"music",
"art",
"ja",
"dataset:Cohere/wikipedia-2023-11-embed-multilingual-v3",
"license:mit",
"region:us"
] | null | 2024-03-19T09:18:33Z | ---
license: mit
datasets:
- Cohere/wikipedia-2023-11-embed-multilingual-v3
language:
- ja
metrics:
- character
library_name: fasttext
tags:
- music
- art
--- |
PeacefulData/GenTranslate | PeacefulData | 2024-05-18T05:37:07Z | 0 | 2 | null | [
"generative translation",
"large language model",
"LLaMA",
"text-generation",
"en",
"zh",
"ja",
"fr",
"es",
"it",
"pt",
"dataset:PeacefulData/HypoTranslate",
"arxiv:2402.06894",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-02-10T05:28:43Z | ---
license: apache-2.0
language:
- en
- zh
- ja
- fr
- es
- it
- pt
tags:
- generative translation
- large language model
- LLaMA
metrics:
- bleu
pipeline_tag: text-generation
datasets:
- PeacefulData/HypoTranslate
---
This repo releases the trained LLaMA-Adapter weights from the paper "GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators".
**Code:** https://github.com/YUCHEN005/GenTranslate
**Data:** https://huggingface.co/datasets/PeacefulData/HypoTranslate
**Model:** This repo
***Filename format:*** [data\_source]\_[src\_language\_code]\_[tgt\_language\_code]\_[task].pth
e.g. covost2_ar_en_st.pth
***Note:***
- Language code look-up: Table 15 & 17 in https://arxiv.org/pdf/2402.06894.pdf
- Source/target language refers to the translation task, so the N-best hypotheses and the ground-truth transcription are both in the target language
- For the speech translation datasets (FLEURS, CoVoST-2, MuST-C), the task ID "mt" denotes a cascaded ASR+MT system
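As an illustration of the filename format above, the sketch below fetches a single adapter checkpoint with `huggingface_hub`; the chosen file is just the example name given earlier, and loading the weights into the LLaMA-Adapter model follows the code repository linked above.

```python
import torch
from huggingface_hub import hf_hub_download

# Download one adapter checkpoint; filenames follow
# [data_source]_[src_language_code]_[tgt_language_code]_[task].pth
ckpt_path = hf_hub_download(
    repo_id="PeacefulData/GenTranslate",
    filename="covost2_ar_en_st.pth",  # example file listed in this card
)
adapter_weights = torch.load(ckpt_path, map_location="cpu")
print(type(adapter_weights))
```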
If you find this work related or useful for your research, please kindly consider citing the work below. Thank you.
```bib
@inproceedings{hu2024gentranslate,
title = "GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators",
author = "Hu, Yuchen and Chen, Chen and Yang, Chao-Han Huck and Li, Ruizhe and Zhang, Dong and Chen, Zhehuai and Chng, Eng Siong",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
publisher = "Association for Computational Linguistics",
year = "2024"
}
``` |
shkna1368/mt5-small-finetuned-mt5-small-poem4e | shkna1368 | 2024-05-18T05:36:44Z | 116 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-18T04:53:19Z | ---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-mt5-small-poem4e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-mt5-small-poem4e
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 121 | nan |
| No log | 2.0 | 242 | nan |
| No log | 3.0 | 363 | nan |
| No log | 4.0 | 484 | nan |
| 0.0 | 5.0 | 605 | nan |
| 0.0 | 6.0 | 726 | nan |
| 0.0 | 7.0 | 847 | nan |
| 0.0 | 8.0 | 968 | nan |
| 0.0 | 9.0 | 1089 | nan |
| 0.0 | 10.0 | 1210 | nan |
| 0.0 | 11.0 | 1331 | nan |
| 0.0 | 12.0 | 1452 | nan |
| 0.0 | 13.0 | 1573 | nan |
| 0.0 | 14.0 | 1694 | nan |
| 0.0 | 15.0 | 1815 | nan |
| 0.0 | 16.0 | 1936 | nan |
| 0.0 | 17.0 | 2057 | nan |
| 0.0 | 18.0 | 2178 | nan |
| 0.0 | 19.0 | 2299 | nan |
| 0.0 | 20.0 | 2420 | nan |
| 0.0 | 21.0 | 2541 | nan |
| 0.0 | 22.0 | 2662 | nan |
| 0.0 | 23.0 | 2783 | nan |
| 0.0 | 24.0 | 2904 | nan |
| 0.0 | 25.0 | 3025 | nan |
| 0.0 | 26.0 | 3146 | nan |
| 0.0 | 27.0 | 3267 | nan |
| 0.0 | 28.0 | 3388 | nan |
| 0.0 | 29.0 | 3509 | nan |
| 0.0 | 30.0 | 3630 | nan |
| 0.0 | 31.0 | 3751 | nan |
| 0.0 | 32.0 | 3872 | nan |
| 0.0 | 33.0 | 3993 | nan |
| 0.0 | 34.0 | 4114 | nan |
| 0.0 | 35.0 | 4235 | nan |
| 0.0 | 36.0 | 4356 | nan |
| 0.0 | 37.0 | 4477 | nan |
| 0.0 | 38.0 | 4598 | nan |
| 0.0 | 39.0 | 4719 | nan |
| 0.0 | 40.0 | 4840 | nan |
| 0.0 | 41.0 | 4961 | nan |
| 0.0 | 42.0 | 5082 | nan |
| 0.0 | 43.0 | 5203 | nan |
| 0.0 | 44.0 | 5324 | nan |
| 0.0 | 45.0 | 5445 | nan |
| 0.0 | 46.0 | 5566 | nan |
| 0.0 | 47.0 | 5687 | nan |
| 0.0 | 48.0 | 5808 | nan |
| 0.0 | 49.0 | 5929 | nan |
| 0.0 | 50.0 | 6050 | nan |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
farzanrahmani/distilbert_base_uncased_question_answering | farzanrahmani | 2024-05-18T05:36:09Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-18T05:33:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Annki/dqn-BreakoutNoFrameskip-v4 | Annki | 2024-05-18T05:28:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T05:28:28Z | ---
library_name: stable-baselines3
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
metrics:
- type: mean_reward
value: 3.10 +/- 4.76
name: mean_reward
verified: false
---
# **DQN** Agent playing **BreakoutNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BreakoutNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga Annki -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga Annki -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
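If you prefer to load the checkpoint directly in Python rather than through the RL Zoo scripts, a minimal sketch looks like the following (our illustration; the checkpoint path is a placeholder for wherever `load_from_hub` saved the `.zip`, and the preprocessing mirrors the `env_wrapper`/`frame_stack` settings listed under Hyperparameters below):

```python
# Minimal sketch: load the downloaded DQN checkpoint with plain SB3.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Standard Atari preprocessing + 4-frame stacking, matching the training setup.
env = make_atari_env("BreakoutNoFrameskip-v4", n_envs=1, seed=0)
env = VecFrameStack(env, n_stack=4)

# Placeholder path - point this at the .zip downloaded into logs/ above.
model = DQN.load("logs/dqn/BreakoutNoFrameskip-v4_1/BreakoutNoFrameskip-v4.zip", env=env)

obs = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```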
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ -orga Annki
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_1940s | nsugianto | 2024-05-18T05:16:30Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-05-17T17:08:24Z | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_1940s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_1940s
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
|
abc88767/7sc100 | abc88767 | 2024-05-18T05:01:24Z | 142 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T04:59:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedesmail16/Train-Augmentation-vit-base | ahmedesmail16 | 2024-05-18T04:56:57Z | 224 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-18T02:07:51Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Train-Augmentation-vit-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Train-Augmentation-vit-base
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9262
- Accuracy: 0.7866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6254 | 0.99 | 93 | 0.8623 | 0.7194 |
| 0.2129 | 2.0 | 187 | 0.7057 | 0.7510 |
| 0.0877 | 2.99 | 280 | 0.8545 | 0.7194 |
| 0.0164 | 4.0 | 374 | 0.9221 | 0.7549 |
| 0.0057 | 4.99 | 467 | 0.8149 | 0.7708 |
| 0.0021 | 6.0 | 561 | 0.8764 | 0.7866 |
| 0.0016 | 6.99 | 654 | 0.9059 | 0.7905 |
| 0.0013 | 8.0 | 748 | 0.9132 | 0.7866 |
| 0.0011 | 8.99 | 841 | 0.9236 | 0.7866 |
| 0.0013 | 9.95 | 930 | 0.9262 | 0.7866 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.15.2
|
sjlee311/sjlee311bart-large-cnn-finetuned | sjlee311 | 2024-05-18T04:56:07Z | 123 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-18T04:08:42Z | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
- precision
- recall
- f1
model-index:
- name: sjlee311bart-large-cnn-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sjlee311bart-large-cnn-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1557
- Rouge1: 49.9356
- Rouge2: 14.8574
- Rougel: 22.2849
- Precision: 86.7404
- Recall: 86.4333
- F1: 86.584
- Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.35.2)
- Fkgl: 10.01
- Cloze Score: 17.05
- Reading Level 13-15: 110
- Reading Level 11-12: 39
- Reading Level 16+: 85
- Reading Level 9-10: 7
- Reading Level Mode: 13-15
- Summac Val: 0.57
- Gen Len: 434.7842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Marina-C/llama3_8B_finance_qa | Marina-C | 2024-05-18T04:48:00Z | 3 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | null | 2024-05-17T20:48:31Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: meta-llama/Meta-Llama-3-8B
model-index:
- name: llama3_8B_finance_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3_8B_finance_qa
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1517 | 1.0 | 5542 | 1.2088 |
| 1.0983 | 2.0 | 11084 | 1.2199 |
| 0.9158 | 3.0 | 16626 | 1.2556 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
forgetUserName/phi-2-role-play | forgetUserName | 2024-05-18T04:46:33Z | 2 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-18T04:46:29Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/Phi-3-mini-128k-instruct
model-index:
- name: phi-2-role-play
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-role-play
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.2 |
Zoyd/suzume-llama-3-8B-multilingual-6_5bpw-exl2 | Zoyd | 2024-05-18T04:34:35Z | 11 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-18T03:46:45Z | ---
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: lightblue/suzume-llama-3-8B-multilingual
results: []
---
**Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.0.21
<p align="center">
<img width=400 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png" alt="Suzume - a Japanese tree sparrow"/>
</p>
# Suzume
This is Suzume 8B, a multilingual finetune of Llama 3 ([meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)).
Llama 3 has exhibited excellent performance on many English language benchmarks.
However, it also appears to have been finetuned mostly on English data, meaning that it will respond in English even if prompted in other languages.
We have fine-tuned Llama 3 on almost 90,000 multilingual conversations, meaning that this model has the smarts of Llama 3 with the added ability to chat in more languages.
Please feel free to comment on this model and give us feedback in the Community tab!
We will release a paper in the future describing how we made the training data, the model, and the evaluations we have conducted of it.
# How to use
The easiest way to use this model on your own computer is to use the [GGUF version of this model (lightblue/suzume-llama-3-8B-multilingual-gguf)](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf) using a program such as [jan.ai](https://jan.ai/) or [LM Studio](https://lmstudio.ai/).
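If you would rather drive the GGUF build from Python, a minimal sketch with the `llama-cpp-python` bindings might look like this (our illustration, not an official example; the GGUF filename is a placeholder for whichever quant you download):

```python
# Illustrative only: chat with a downloaded GGUF quant via llama-cpp-python.
from llama_cpp import Llama

# Placeholder filename - use the quant you actually downloaded.
llm = Llama(model_path="suzume-llama-3-8B-multilingual-Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Bonjour!"}],
    max_tokens=100,
    temperature=0.0,
)
print(out["choices"][0]["message"]["content"])
```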
If you want to use this model directly in Python, we recommend using vLLM for the fastest inference speeds.
```python
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/suzume-llama-3-8B-multilingual")
messages = []
messages.append({"role": "user", "content": "Bonjour!"})
prompt = llm.llm_engine.tokenizer.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
# Evaluation scores
We achieve the following MT-Bench scores across 6 languages:
| | **meta-llama/Meta-Llama-3-8B-Instruct** | **lightblue/suzume-llama-3-8B-multilingual** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** |
|-----------------|-----------------------------------------|----------------------------------------------|-----------------------------------|-------------------|
| **German** 🇩🇪 | NaN | 7.26 | 6.99 | 7.68 |
| **French** 🇫🇷 | NaN | 7.66 | 7.29 | 7.74 |
| **Japanese** 🇯🇵 | NaN | 6.56 | 6.22 | 7.84 |
| **Russian** 🇷🇺 * | NaN | 8.19 | 8.28 | 7.94 |
| **Chinese** 🇨🇳 | NaN | 7.11 | 6.97 | 7.55 |
| **English** 🇺🇸 | 7.98 | 7.73 | 7.92 | 8.26 |
\* (Note the Russian scores exclude code, reasoning and math problems due to not having any translated reference answers for these questions.)
We observe minimal degradation of Llama 3's English ability while achieving best-in-class multilingual abilities compared to the top-rated 7B model ([Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)) on the [Chatbot Arena Leaderboard](https://chat.lmsys.org/?leaderboard).
[Here is our evaluation script.](https://drive.google.com/file/d/15HPn7452t8LbTD9HKSl7ngYYWnsoOG08/view?usp=sharing)
# Training data
We train on three sources of data to create this model:
* [lightblue/tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4) - 76,338 conversations
* A diverse dataset of initial inputs sampled from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and then used to prompt `gpt-4-0125-preview`
* [megagonlabs/instruction_ja](https://github.com/megagonlabs/instruction_ja) - 669 conversations
* A hand-edited dataset of nearly 700 Japanese conversations taken originally from translations of the [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) dataset.
* [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json) - 6,206 conversations
* Multilingual conversations of humans talking to GPT-4.
<details><summary>We prepare our data like so:</summary>
```python
import pandas as pd
from datasets import Dataset, load_dataset, concatenate_datasets
### Tagengo
gpt4_dataset = load_dataset("lightblue/tagengo-gpt4", split="train")
gpt4_dataset = gpt4_dataset.filter(lambda x: x["response"][1] == "stop")
####
### Megagon
megagon_df = pd.read_json(
"https://raw.githubusercontent.com/megagonlabs/instruction_ja/main/data/data.jsonl",
lines=True,
orient="records"
)
role_map = {"user": "human", "agent": "gpt"}
megagon_df["conversations"] = megagon_df.utterances.apply(lambda x: [{"from": role_map[y["name"]], "value": y["text"]} for y in x])
megagon_df["language"] = "Japanese"
megagon_df = megagon_df[["conversations", "language"]]
megagon_dataset = Dataset.from_pandas(megagon_df)
###
### Openchat
openchat_df = pd.read_json("https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json?download=true")
openchat_df["conversations"] = openchat_df["items"]
openchat_dataset = Dataset.from_pandas(openchat_df)
###
dataset = concatenate_datasets([gpt4_dataset, megagon_dataset, openchat_dataset])
dataset = dataset.filter(lambda x: not any([y["value"] is None for y in x["conversations"]]))
dataset.select_columns(["conversations"]).to_json("/workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json")
```
</details>
<br/>
# workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the above described dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6595
## Training procedure
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json
ds_type: json # see other options below
type: sharegpt
conversation: llama-3
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual/prepared_tagengo_openchat_megagon
val_set_size: 0.01
output_dir: /workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
use_wandb: true
wandb_project: wandb_project
wandb_entity: wandb_entity
wandb_name: wandb_name
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 5
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
<details><summary>Note - we added this Llama 3 template to fastchat directly as the Llama 3 chat template was not supported when we trained this model.</summary>
```python
from fastchat.conversation import Conversation
from fastchat.conversation import register_conv_template
from fastchat.conversation import SeparatorStyle
register_conv_template(
Conversation(
name="llama-3",
system_template="<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}",
roles=("<|start_header_id|>user<|end_header_id|>\n", "<|start_header_id|>assistant<|end_header_id|>\n"),
sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE,
sep="<|eot_id|>",
stop_token_ids=[128009],
stop_str="<|eot_id|>",
)
)
```
</details><br>
### Training hyperparameters
This model was trained using 4 x A100 (80GB) for roughly 2.5 hours.
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1894 | 0.0 | 1 | 1.0110 |
| 0.8493 | 0.2 | 73 | 0.7057 |
| 0.8047 | 0.4 | 146 | 0.6835 |
| 0.7644 | 0.6 | 219 | 0.6687 |
| 0.7528 | 0.8 | 292 | 0.6615 |
| 0.7794 | 1.0 | 365 | 0.6595 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
# Developer
Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn))
|
TinyPixel/danube-chatml | TinyPixel | 2024-05-18T04:28:37Z | 156 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T04:25:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vuongnhathien/test-10-image | vuongnhathien | 2024-05-18T04:19:02Z | 272 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-18T04:18:08Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-10-image
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-10-image
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5151
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 4.7400 | 0.0 |
| No log | 2.0 | 2 | 4.5670 | 0.0 |
| No log | 3.0 | 3 | 4.5151 | 0.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
shkna1368/mt5-small-finetuned-mt5-small-poem7b | shkna1368 | 2024-05-18T04:18:59Z | 116 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-18T04:12:51Z | ---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-mt5-small-poem7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-mt5-small-poem7b
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 254 | nan |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
vuongnhathien/my_awesome_food_model | vuongnhathien | 2024-05-18T04:16:36Z | 269 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-17T16:06:33Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5151
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 4.7400 | 0.0 |
| No log | 2.0 | 2 | 4.5670 | 0.0 |
| No log | 3.0 | 3 | 4.5151 | 0.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
MOADdev/multilingual-e5-large-amethyst | MOADdev | 2024-05-18T04:05:05Z | 12 | 3 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-18T03:59:32Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# nizamovtimur/multilingual-e5-large-amethyst
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nizamovtimur/multilingual-e5-large-amethyst')
embeddings = model.encode(sentences)
print(embeddings)
```
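As a quick illustration of the semantic-search use case mentioned above, the sketch below (our addition, with made-up example sentences) ranks candidate passages by cosine similarity. Note that the base multilingual-e5 family was trained with "query: "/"passage: " prefixes, so check whether this finetune expects them.

```python
# Minimal sketch: rank passages against a query by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('nizamovtimur/multilingual-e5-large-amethyst')

query = "How do I reset my password?"
passages = [
    "To change your password, open account settings and choose 'Security'.",
    "Our office is open Monday to Friday from 9am to 6pm.",
]

# The model normalizes its embeddings (see the Normalize() module below),
# so cosine similarity and dot product give the same ranking.
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

scores = util.cos_sim(query_emb, passage_embs)[0].tolist()
for passage, score in sorted(zip(passages, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")
```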
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nizamovtimur/multilingual-e5-large-amethyst)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 386 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 12,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 7404,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Zoyd/suzume-llama-3-8B-multilingual-4_25bpw-exl2 | Zoyd | 2024-05-18T04:01:35Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-18T03:45:59Z | ---
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: lightblue/suzume-llama-3-8B-multilingual
results: []
---
**Exllamav2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.0.21
<p align="center">
<img width=400 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png" alt="Suzume - a Japanese tree sparrow"/>
</p>
# Suzume
This is Suzume 8B, a multilingual finetune of Llama 3 ([meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)).
Llama 3 has exhibited excellent performance on many English language benchmarks.
However, it also appears to have been finetuned mostly on English data, meaning that it will respond in English even if prompted in other languages.
We have fine-tuned Llama 3 on almost 90,000 multilingual conversations, meaning that this model has the smarts of Llama 3 with the added ability to chat in more languages.
Please feel free to comment on this model and give us feedback in the Community tab!
We will release a paper in the future describing how we made the training data, the model, and the evaluations we have conducted of it.
# How to use
The easiest way to use this model on your own computer is to use the [GGUF version of this model (lightblue/suzume-llama-3-8B-multilingual-gguf)](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf) using a program such as [jan.ai](https://jan.ai/) or [LM Studio](https://lmstudio.ai/).
If you want to use this model directly in Python, we recommend using vLLM for the fastest inference speeds.
```python
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/suzume-llama-3-8B-multilingual")
messages = []
messages.append({"role": "user", "content": "Bonjour!"})
prompt = llm.llm_engine.tokenizer.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
# Evaluation scores
We achieve the following MT-Bench scores across 6 languages:
| | **meta-llama/Meta-Llama-3-8B-Instruct** | **lightblue/suzume-llama-3-8B-multilingual** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** |
|-----------------|-----------------------------------------|----------------------------------------------|-----------------------------------|-------------------|
| **German** 🇩🇪 | NaN | 7.26 | 6.99 | 7.68 |
| **French** 🇫🇷 | NaN | 7.66 | 7.29 | 7.74 |
| **Japanese** 🇯🇵 | NaN | 6.56 | 6.22 | 7.84 |
| **Russian** 🇷🇺 * | NaN | 8.19 | 8.28 | 7.94 |
| **Chinese** 🇨🇳 | NaN | 7.11 | 6.97 | 7.55 |
| **English** 🇺🇸 | 7.98 | 7.73 | 7.92 | 8.26 |
\* (Note the Russian scores exclude code, reasoning and math problems due to not having any translated reference answers for these questions.)
We observe minimal degradation of Llama 3's English ability while achieving best-in-class multilingual abilities compared to the top-rated 7B model ([Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)) on the [Chatbot Arena Leaderboard](https://chat.lmsys.org/?leaderboard).
[Here is our evaluation script.](https://drive.google.com/file/d/15HPn7452t8LbTD9HKSl7ngYYWnsoOG08/view?usp=sharing)
# Training data
We train on three sources of data to create this model:
* [lightblue/tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4) - 76,338 conversations
* A diverse dataset of initial inputs sampled from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and then used to prompt `gpt-4-0125-preview`
* [megagonlabs/instruction_ja](https://github.com/megagonlabs/instruction_ja) - 669 conversations
* A hand-edited dataset of nearly 700 Japanese conversations taken originally from translations of the [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) dataset.
* [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json) - 6,206 conversations
* Multilingual conversations of humans talking to GPT-4.
<details><summary>We prepare our data like so:</summary>
```python
import pandas as pd
from datasets import Dataset, load_dataset, concatenate_datasets
### Tagengo
gpt4_dataset = load_dataset("lightblue/tagengo-gpt4", split="train")
gpt4_dataset = gpt4_dataset.filter(lambda x: x["response"][1] == "stop")
####
### Megagon
megagon_df = pd.read_json(
"https://raw.githubusercontent.com/megagonlabs/instruction_ja/main/data/data.jsonl",
lines=True,
orient="records"
)
role_map = {"user": "human", "agent": "gpt"}
megagon_df["conversations"] = megagon_df.utterances.apply(lambda x: [{"from": role_map[y["name"]], "value": y["text"]} for y in x])
megagon_df["language"] = "Japanese"
megagon_df = megagon_df[["conversations", "language"]]
megagon_dataset = Dataset.from_pandas(megagon_df)
###
### Openchat
openchat_df = pd.read_json("https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json?download=true")
openchat_df["conversations"] = openchat_df["items"]
openchat_dataset = Dataset.from_pandas(openchat_df)
###
dataset = concatenate_datasets([gpt4_dataset, megagon_dataset, openchat_dataset])
dataset = dataset.filter(lambda x: not any([y["value"] is None for y in x["conversations"]]))
dataset.select_columns(["conversations"]).to_json("/workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json")
```
</details>
<br/>
# workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the above described dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6595
## Training procedure
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json
ds_type: json # see other options below
type: sharegpt
conversation: llama-3
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual/prepared_tagengo_openchat_megagon
val_set_size: 0.01
output_dir: /workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
use_wandb: true
wandb_project: wandb_project
wandb_entity: wandb_entity
wandb_name: wandb_name
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 5
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
<details><summary>Note - we added this Llama 3 template to fastchat directly as the Llama 3 chat template was not supported when we trained this model.</summary>
```python
from fastchat.conversation import Conversation
from fastchat.conversation import register_conv_template
from fastchat.conversation import SeparatorStyle
register_conv_template(
Conversation(
name="llama-3",
system_template="<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}",
roles=("<|start_header_id|>user<|end_header_id|>\n", "<|start_header_id|>assistant<|end_header_id|>\n"),
sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE,
sep="<|eot_id|>",
stop_token_ids=[128009],
stop_str="<|eot_id|>",
)
)
```
</details><br>
### Training hyperparameters
This model was trained using 4 x A100 (80GB) for roughly 2.5 hours.
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1894 | 0.0 | 1 | 1.0110 |
| 0.8493 | 0.2 | 73 | 0.7057 |
| 0.8047 | 0.4 | 146 | 0.6835 |
| 0.7644 | 0.6 | 219 | 0.6687 |
| 0.7528 | 0.8 | 292 | 0.6615 |
| 0.7794 | 1.0 | 365 | 0.6595 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
# Developer
Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn))
|
moiseserg/llama3-unslot_finetuned | moiseserg | 2024-05-18T04:01:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T03:46:45Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** moiseserg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
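The card does not include a usage snippet. A minimal sketch of loading the checkpoint with plain `transformers` is shown below; it assumes the repository contains merged causal-LM weights rather than only a LoRA adapter, which is not stated on the card.
```python
# Hypothetical sketch -- assumes merged Llama-3 causal-LM weights in this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "moiseserg/llama3-unslot_finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```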
|
Zoyd/suzume-llama-3-8B-multilingual-4_0bpw-exl2 | Zoyd | 2024-05-18T04:00:45Z | 11 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-18T03:45:31Z | ---
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: lightblue/suzume-llama-3-8B-multilingual
results: []
---
**Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21
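EXL2 weights are consumed by ExLlamaV2-based runtimes (for example TabbyAPI or text-generation-webui) rather than plain `transformers`. As a hedged sketch, the quant can be fetched locally with `huggingface_hub` before pointing such a loader at the folder:
```python
# Sketch: download the EXL2 quant for use with an ExLlamaV2-based loader.
from huggingface_hub import snapshot_download
local_dir = snapshot_download(repo_id="Zoyd/suzume-llama-3-8B-multilingual-4_0bpw-exl2")
print("EXL2 weights downloaded to:", local_dir)
```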
<p align="center">
<img width=400 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png" alt="Suzume - a Japanese tree sparrow"/>
</p>
# Suzume
This is Suzume 8B, a multilingual finetune of Llama 3 ([meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)).
Llama 3 has exhibited excellent performance on many English-language benchmarks.
However, it also appears to have been finetuned mostly on English data, meaning that it tends to respond in English even when prompted in other languages.
We have fine-tuned Llama 3 on almost 90,000 multilingual conversations, meaning that this model keeps the smarts of Llama 3 while adding the ability to chat in more languages.
Please feel free to comment on this model and give us feedback in the Community tab!
We will release a paper in the future describing how we made the training data, the model, and the evaluations we have conducted of it.
# How to use
The easiest way to use this model on your own computer is to use the [GGUF version of this model (lightblue/suzume-llama-3-8B-multilingual-gguf)](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf) using a program such as [jan.ai](https://jan.ai/) or [LM Studio](https://lmstudio.ai/).
If you want to use this model directly in Python, we recommend using vLLM for the fastest inference speeds.
```python
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/suzume-llama-3-8B-multilingual")
messages = []
messages.append({"role": "user", "content": "Bonjour!"})
prompt = llm.llm_engine.tokenizer.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
# Evaluation scores
We achieve the following MT-Bench scores across 6 languages:
| | **meta-llama/Meta-Llama-3-8B-Instruct** | **lightblue/suzume-llama-3-8B-multilingual** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** |
|-----------------|-----------------------------------------|----------------------------------------------|-----------------------------------|-------------------|
| **German** 🇩🇪 | NaN | 7.26 | 6.99 | 7.68 |
| **French** 🇫🇷 | NaN | 7.66 | 7.29 | 7.74 |
| **Japanese** 🇯🇵 | NaN | 6.56 | 6.22 | 7.84 |
| **Russian** 🇷🇺 * | NaN | 8.19 | 8.28 | 7.94 |
| **Chinese** 🇨🇳 | NaN | 7.11 | 6.97 | 7.55 |
| **English** 🇺🇸 | 7.98 | 7.73 | 7.92 | 8.26 |
\* (Note the Russian scores exclude code, reasoning and math problems due to not having any translated reference answers for these questions.)
We observe minimal degradation of Llama 3's English ability while achieving best-in-class multilingual abilities compared to the top-rated 7B model ([Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)) on the [Chatbot Arena Leaderboard](https://chat.lmsys.org/?leaderboard).
[Here is our evaluation script.](https://drive.google.com/file/d/15HPn7452t8LbTD9HKSl7ngYYWnsoOG08/view?usp=sharing)
# Training data
We train on three sources of data to create this model:
* [lightblue/tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4) - 76,338 conversations
* A diverse dataset of initial inputs sampled from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and then used to prompt `gpt-4-0125-preview`
* [megagonlabs/instruction_ja](https://github.com/megagonlabs/instruction_ja) - 669 conversations
* A hand-edited dataset of nearly 700 Japanese conversations taken originally from translations of the [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) dataset.
* [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json) - 6,206 conversations
* Multilingual conversations of humans talking to GPT-4.
<details><summary>We prepare our data like so:</summary>
```python
import pandas as pd
from datasets import Dataset, load_dataset, concatenate_datasets
### Tagengo
gpt4_dataset = load_dataset("lightblue/tagengo-gpt4", split="train")
gpt4_dataset = gpt4_dataset.filter(lambda x: x["response"][1] == "stop")
####
### Megagon
megagon_df = pd.read_json(
"https://raw.githubusercontent.com/megagonlabs/instruction_ja/main/data/data.jsonl",
lines=True,
orient="records"
)
role_map = {"user": "human", "agent": "gpt"}
megagon_df["conversations"] = megagon_df.utterances.apply(lambda x: [{"from": role_map[y["name"]], "value": y["text"]} for y in x])
megagon_df["language"] = "Japanese"
megagon_df = megagon_df[["conversations", "language"]]
megagon_dataset = Dataset.from_pandas(megagon_df)
###
### Openchat
openchat_df = pd.read_json("https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json?download=true")
openchat_df["conversations"] = openchat_df["items"]
openchat_dataset = Dataset.from_pandas(openchat_df)
###
dataset = concatenate_datasets([gpt4_dataset, megagon_dataset, openchat_dataset])
dataset = dataset.filter(lambda x: not any([y["value"] is None for y in x["conversations"]]))
dataset.select_columns(["conversations"]).to_json("/workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json")
```
</details>
<br/>
# workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the above described dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6595
## Training procedure
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json
ds_type: json # see other options below
type: sharegpt
conversation: llama-3
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual/prepared_tagengo_openchat_megagon
val_set_size: 0.01
output_dir: /workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
use_wandb: true
wandb_project: wandb_project
wandb_entity: wandb_entity
wandb_name: wandb_name
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 5
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
<details><summary>Note - we added this Llama 3 template to fastchat directly as the Llama 3 chat template was not supported when we trained this model.</summary>
```python
from fastchat.conversation import Conversation
from fastchat.conversation import register_conv_template
from fastchat.conversation import SeparatorStyle
register_conv_template(
Conversation(
name="llama-3",
system_template="<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}",
roles=("<|start_header_id|>user<|end_header_id|>\n", "<|start_header_id|>assistant<|end_header_id|>\n"),
sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE,
sep="<|eot_id|>",
stop_token_ids=[128009],
stop_str="<|eot_id|>",
)
)
```
</details><br>
### Training hyperparameters
This model was trained using 4 x A100 (80GB) for roughly 2.5 hours.
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1894 | 0.0 | 1 | 1.0110 |
| 0.8493 | 0.2 | 73 | 0.7057 |
| 0.8047 | 0.4 | 146 | 0.6835 |
| 0.7644 | 0.6 | 219 | 0.6687 |
| 0.7528 | 0.8 | 292 | 0.6615 |
| 0.7794 | 1.0 | 365 | 0.6595 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
# Developer
Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn))
|
Zoyd/suzume-llama-3-8B-multilingual-3_75bpw-exl2 | Zoyd | 2024-05-18T03:59:18Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-18T03:45:12Z | ---
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: lightblue/suzume-llama-3-8B-multilingual
results: []
---
**Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21
<p align="center">
<img width=400 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png" alt="Suzume - a Japanese tree sparrow"/>
</p>
# Suzume
This is Suzume 8B, a multilingual finetune of Llama 3 ([meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)).
Llama 3 has exhibited excellent performance on many English-language benchmarks.
However, it also appears to have been finetuned mostly on English data, meaning that it tends to respond in English even when prompted in other languages.
We have fine-tuned Llama 3 on almost 90,000 multilingual conversations, meaning that this model keeps the smarts of Llama 3 while adding the ability to chat in more languages.
Please feel free to comment on this model and give us feedback in the Community tab!
We will release a paper in the future describing how we made the training data, the model, and the evaluations we have conducted of it.
# How to use
The easiest way to use this model on your own computer is to use the [GGUF version of this model (lightblue/suzume-llama-3-8B-multilingual-gguf)](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf) using a program such as [jan.ai](https://jan.ai/) or [LM Studio](https://lmstudio.ai/).
If you want to use this model directly in Python, we recommend using vLLM for the fastest inference speeds.
```python
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/suzume-llama-3-8B-multilingual")
messages = []
messages.append({"role": "user", "content": "Bonjour!"})
prompt = llm.llm_engine.tokenizer.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
# Evaluation scores
We achieve the following MT-Bench scores across 6 languages:
| | **meta-llama/Meta-Llama-3-8B-Instruct** | **lightblue/suzume-llama-3-8B-multilingual** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** |
|-----------------|-----------------------------------------|----------------------------------------------|-----------------------------------|-------------------|
| **German** 🇩🇪 | NaN | 7.26 | 6.99 | 7.68 |
| **French** 🇫🇷 | NaN | 7.66 | 7.29 | 7.74 |
| **Japanese** 🇯🇵 | NaN | 6.56 | 6.22 | 7.84 |
| **Russian** 🇷🇺 * | NaN | 8.19 | 8.28 | 7.94 |
| **Chinese** 🇨🇳 | NaN | 7.11 | 6.97 | 7.55 |
| **English** 🇺🇸 | 7.98 | 7.73 | 7.92 | 8.26 |
\* (Note the Russian scores exclude code, reasoning and math problems due to not having any translated reference answers for these questions.)
We observe minimal degradation of Llama 3's English ability while achieving best-in-class multilingual abilities compared to the top-rated 7B model ([Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)) on the [Chatbot Arena Leaderboard](https://chat.lmsys.org/?leaderboard).
[Here is our evaluation script.](https://drive.google.com/file/d/15HPn7452t8LbTD9HKSl7ngYYWnsoOG08/view?usp=sharing)
# Training data
We train on three sources of data to create this model:
* [lightblue/tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4) - 76,338 conversations
* A diverse dataset of initial inputs sampled from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and then used to prompt `gpt-4-0125-preview`
* [megagonlabs/instruction_ja](https://github.com/megagonlabs/instruction_ja) - 669 conversations
* A hand-edited dataset of nearly 700 Japanese conversations taken originally from translations of the [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) dataset.
* [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json) - 6,206 conversations
* Multilingual conversations of humans talking to GPT-4.
<details><summary>We prepare our data like so:</summary>
```python
import pandas as pd
from datasets import Dataset, load_dataset, concatenate_datasets
### Tagengo
gpt4_dataset = load_dataset("lightblue/tagengo-gpt4", split="train")
gpt4_dataset = gpt4_dataset.filter(lambda x: x["response"][1] == "stop")
####
### Megagon
megagon_df = pd.read_json(
"https://raw.githubusercontent.com/megagonlabs/instruction_ja/main/data/data.jsonl",
lines=True,
orient="records"
)
role_map = {"user": "human", "agent": "gpt"}
megagon_df["conversations"] = megagon_df.utterances.apply(lambda x: [{"from": role_map[y["name"]], "value": y["text"]} for y in x])
megagon_df["language"] = "Japanese"
megagon_df = megagon_df[["conversations", "language"]]
megagon_dataset = Dataset.from_pandas(megagon_df)
###
### Openchat
openchat_df = pd.read_json("https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json?download=true")
openchat_df["conversations"] = openchat_df["items"]
openchat_dataset = Dataset.from_pandas(openchat_df)
###
dataset = concatenate_datasets([gpt4_dataset, megagon_dataset, openchat_dataset])
dataset = dataset.filter(lambda x: not any([y["value"] is None for y in x["conversations"]]))
dataset.select_columns(["conversations"]).to_json("/workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json")
```
</details>
<br/>
# workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the above described dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6595
## Training procedure
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json
ds_type: json # see other options below
type: sharegpt
conversation: llama-3
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual/prepared_tagengo_openchat_megagon
val_set_size: 0.01
output_dir: /workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
use_wandb: true
wandb_project: wandb_project
wandb_entity: wandb_entity
wandb_name: wandb_name
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 5
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
<details><summary>Note - we added this Llama 3 template to fastchat directly as the Llama 3 chat template was not supported when we trained this model.</summary>
```python
from fastchat.conversation import Conversation
from fastchat.conversation import register_conv_template
from fastchat.conversation import SeparatorStyle
register_conv_template(
Conversation(
name="llama-3",
system_template="<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}",
roles=("<|start_header_id|>user<|end_header_id|>\n", "<|start_header_id|>assistant<|end_header_id|>\n"),
sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE,
sep="<|eot_id|>",
stop_token_ids=[128009],
stop_str="<|eot_id|>",
)
)
```
</details><br>
### Training hyperparameters
This model was trained using 4 x A100 (80GB) for roughly 2.5 hours.
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1894 | 0.0 | 1 | 1.0110 |
| 0.8493 | 0.2 | 73 | 0.7057 |
| 0.8047 | 0.4 | 146 | 0.6835 |
| 0.7644 | 0.6 | 219 | 0.6687 |
| 0.7528 | 0.8 | 292 | 0.6615 |
| 0.7794 | 1.0 | 365 | 0.6595 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
# Developer
Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn))
|
Ksgk-fy/alignment-adaptor-test02 | Ksgk-fy | 2024-05-18T03:49:23Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2024-03-31T10:39:49Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-beta
model-index:
- name: alignment-adaptor-test02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alignment-adaptor-test02
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unknown dataset.
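Because this repository is a PEFT adapter rather than a full model, it is loaded by attaching the adapter to the Zephyr base model. A minimal sketch, assuming the adapter was trained against the causal-LM head as is usual for SFT:
```python
# Sketch: attach the PEFT adapter to its Zephyr base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "Ksgk-fy/alignment-adaptor-test02"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # wraps the base model with the LoRA weights
```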
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Amyye/Amy_awesome_qa_model | Amyye | 2024-05-18T03:48:56Z | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-17T22:42:15Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: Amy_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Amy_awesome_qa_model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
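The tags mark this as a TensorFlow DistilBERT checkpoint with a question-answering head, so a hedged usage sketch with the `question-answering` pipeline might look like the following; the example question and context are placeholders, and `framework="tf"` is passed because only TF weights are listed.
```python
# Sketch: extractive QA with the TF checkpoint; repo contents inferred from the tags.
from transformers import pipeline
qa = pipeline("question-answering", model="Amyye/Amy_awesome_qa_model", framework="tf")
result = qa(
    question="What task does the model perform?",
    context="This DistilBERT model extracts answer spans from a given context passage.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```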
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tabbas97/distilbert-base-uncased-finetuned-pubmed-lora-trained-tabbas97 | tabbas97 | 2024-05-18T03:47:46Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:pubmed-summarization",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T22:06:36Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
datasets:
- pubmed-summarization
model-index:
- name: distilbert-base-uncased-finetuned-pubmed-lora-trained-tabbas97
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-pubmed-lora-trained-tabbas97
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the pubmed-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9256
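Because this is a LoRA (PEFT) adapter trained with a masked-language-modelling objective, it is loaded on top of the DistilBERT base. A minimal sketch, assuming the adapter targets `AutoModelForMaskedLM`:
```python
# Sketch: load the LoRA adapter onto the DistilBERT masked-LM base.
from transformers import AutoModelForMaskedLM, AutoTokenizer
from peft import PeftModel
base_id = "distilbert-base-uncased"
adapter_id = "tabbas97/distilbert-base-uncased-finetuned-pubmed-lora-trained-tabbas97"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForMaskedLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
inputs = tokenizer("The patients were treated with [MASK] therapy.", return_tensors="pt")
logits = model(**inputs).logits  # scores at the [MASK] position reflect the PubMed-adapted weights
```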
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1986 | 0.1667 | 500 | 2.0156 |
| 2.1414 | 0.3334 | 1000 | 1.9893 |
| 2.1247 | 0.5002 | 1500 | 1.9770 |
| 2.1106 | 0.6669 | 2000 | 1.9640 |
| 2.103 | 0.8336 | 2500 | 1.9548 |
| 2.0974 | 1.0003 | 3000 | 1.9519 |
| 2.0874 | 1.1671 | 3500 | 1.9506 |
| 2.0842 | 1.3338 | 4000 | 1.9470 |
| 2.0799 | 1.5005 | 4500 | 1.9406 |
| 2.0781 | 1.6672 | 5000 | 1.9363 |
| 2.0763 | 1.8339 | 5500 | 1.9371 |
| 2.0664 | 2.0007 | 6000 | 1.9311 |
| 2.0717 | 2.1674 | 6500 | 1.9277 |
| 2.0683 | 2.3341 | 7000 | 1.9247 |
| 2.0622 | 2.5008 | 7500 | 1.9290 |
| 2.0614 | 2.6676 | 8000 | 1.9170 |
| 2.0614 | 2.8343 | 8500 | 1.9239 |
| 2.0646 | 3.0010 | 9000 | 1.9211 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
abc88767/2c100 | abc88767 | 2024-05-18T03:45:36Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T22:16:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
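The card leaves this section blank. Based only on the repository tags (a StableLM causal LM intended for conversational text generation), a hedged sketch would be the following; whether the checkpoint actually ships a chat template is an assumption.
```python
# Hypothetical sketch inferred from the repo tags (StableLM causal LM, chat-style use).
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "abc88767/2c100"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids.to(model.device), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```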
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/sentiment-lora-r8a2d0.15-0 | apwic | 2024-05-18T03:41:13Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-18T03:07:58Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r8a2d0.15-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r8a2d0.15-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3217
- Accuracy: 0.8622
- Precision: 0.8326
- Recall: 0.8375
- F1: 0.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5593 | 1.0 | 122 | 0.5026 | 0.7268 | 0.6658 | 0.6542 | 0.6589 |
| 0.4995 | 2.0 | 244 | 0.4797 | 0.7544 | 0.7149 | 0.7412 | 0.7226 |
| 0.4612 | 3.0 | 366 | 0.4282 | 0.7644 | 0.7199 | 0.7358 | 0.7262 |
| 0.4019 | 4.0 | 488 | 0.3934 | 0.8296 | 0.7949 | 0.7919 | 0.7934 |
| 0.3665 | 5.0 | 610 | 0.4234 | 0.7970 | 0.7618 | 0.7964 | 0.7720 |
| 0.334 | 6.0 | 732 | 0.3723 | 0.8195 | 0.7817 | 0.7973 | 0.7884 |
| 0.3263 | 7.0 | 854 | 0.3704 | 0.8346 | 0.7990 | 0.8230 | 0.8086 |
| 0.3076 | 8.0 | 976 | 0.3521 | 0.8471 | 0.8153 | 0.8168 | 0.8160 |
| 0.298 | 9.0 | 1098 | 0.3522 | 0.8471 | 0.8138 | 0.8243 | 0.8187 |
| 0.2923 | 10.0 | 1220 | 0.3375 | 0.8571 | 0.8289 | 0.8239 | 0.8264 |
| 0.2689 | 11.0 | 1342 | 0.3392 | 0.8622 | 0.8319 | 0.8400 | 0.8357 |
| 0.2686 | 12.0 | 1464 | 0.3484 | 0.8622 | 0.8309 | 0.8450 | 0.8373 |
| 0.2726 | 13.0 | 1586 | 0.3258 | 0.8596 | 0.8316 | 0.8282 | 0.8298 |
| 0.2713 | 14.0 | 1708 | 0.3246 | 0.8622 | 0.8333 | 0.8350 | 0.8341 |
| 0.2577 | 15.0 | 1830 | 0.3307 | 0.8596 | 0.8293 | 0.8357 | 0.8324 |
| 0.2519 | 16.0 | 1952 | 0.3305 | 0.8622 | 0.8314 | 0.8425 | 0.8365 |
| 0.2488 | 17.0 | 2074 | 0.3234 | 0.8546 | 0.8246 | 0.8246 | 0.8246 |
| 0.2546 | 18.0 | 2196 | 0.3247 | 0.8647 | 0.8346 | 0.8442 | 0.8391 |
| 0.2463 | 19.0 | 2318 | 0.3204 | 0.8596 | 0.8307 | 0.8307 | 0.8307 |
| 0.2458 | 20.0 | 2440 | 0.3217 | 0.8622 | 0.8326 | 0.8375 | 0.8349 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Yntec/DeleteThis | Yntec | 2024-05-18T03:31:13Z | 205 | 0 | diffusers | [
"diffusers",
"safetensors",
"Nothing",
"XpucT",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-18T01:57:36Z | ---
license: cc-by-nc-nd-4.0
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Nothing
- XpucT
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Delete This
My first mistake was making this mix of Deliberate. My second one was to publicly release it.
Samples and prompts:

(Click for larger)
Top left: garbage dump
Top right: The worlds most delicious burrito, 5 star food, tasty, yummy, detailed, centered, digital painting, artstation, concept art, donato giancola, joseph christian leyendecker, wlop, boris vallejo, breathtaking, 8k resolution, extremely detailed, beautiful, establishing shot, artistic, hyperrealistic, beautiful face, octane render, cinematic lighting, dramatic lighting, masterpiece
Bottom left: analog style 70s color photograph of young Harrison Ford as Han Solo with wife and daughter, star wars behind the scenes
Bottom right: very dirty food, soaking, honey jam pool, spilled milk, burnt clothes, cheese room. (mud)1.2
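For reference, a minimal diffusers sketch for reproducing one of the sample prompts above, assuming the repo loads as a standard `StableDiffusionPipeline` (which its pipeline tag indicates); the prompt below is shortened from the samples.
```python
# Sketch: text-to-image generation with diffusers.
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("Yntec/DeleteThis", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "The worlds most delicious burrito, 5 star food, tasty, yummy, detailed, digital painting"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("burrito.png")
```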
https://huggingface.co/XpucT/Deliberate |
Ramanen/finetuning-sentiment-model-3000-samples | Ramanen | 2024-05-18T03:10:13Z | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-18T03:06:06Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
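The card omits a usage example. A minimal sketch with the text-classification pipeline follows; note that the label names depend on the (undocumented) fine-tuning data.
```python
# Sketch: sentiment inference with the fine-tuned DistilBERT classifier.
from transformers import pipeline
classifier = pipeline("text-classification", model="Ramanen/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was surprisingly good!"))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- actual labels depend on the training config
```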
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jonathanjordan21/mamba | jonathanjordan21 | 2024-05-18T03:00:32Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:state-spaces/mamba-130m-hf",
"base_model:finetune:state-spaces/mamba-130m-hf",
"region:us"
] | null | 2024-05-18T03:00:29Z | ---
base_model: state-spaces/mamba-130m-hf
tags:
- generated_from_trainer
model-index:
- name: mamba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](None)
# mamba
This model is a fine-tuned version of [state-spaces/mamba-130m-hf](https://huggingface.co/state-spaces/mamba-130m-hf) on an unknown dataset.
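No usage snippet is provided. Since the base model is the HF-format Mamba 130M, a hedged sketch could look like the following; it assumes this repo holds full model weights and a config (not just trainer state) and borrows the base tokenizer in case none is bundled.
```python
# Sketch: causal generation with the fine-tuned Mamba checkpoint (transformers >= 4.39).
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")  # base tokenizer as a fallback
model = AutoModelForCausalLM.from_pretrained("jonathanjordan21/mamba")
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```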
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.41.0
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
EdgarDev/recetas-roberta | EdgarDev | 2024-05-18T02:58:51Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-18T02:03:03Z | ---
license: mit
tags:
- generated_from_trainer
base_model: xlm-roberta-base
model-index:
- name: recetas-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recetas-roberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1317
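A minimal fill-mask sketch is shown below; the Spanish recipe prompt is an assumption based on the model name, and `<mask>` is the XLM-RoBERTa mask token.
```python
# Sketch: masked-token prediction with the fine-tuned XLM-RoBERTa model.
from transformers import pipeline
fill = pipeline("fill-mask", model="EdgarDev/recetas-roberta")
for pred in fill("Agregar dos cucharadas de <mask> y mezclar bien."):
    print(pred["token_str"], round(pred["score"], 3))
```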
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3648 | 1.0 | 730 | 1.1781 |
| 1.2675 | 2.0 | 1460 | 1.1560 |
| 1.2021 | 3.0 | 2190 | 1.1317 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
wahid028/llama3-finetuned-legalQA-unsloth | wahid028 | 2024-05-18T02:49:12Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T02:25:17Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
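The card leaves this section blank. Given the Llama-3 text-generation and conversational tags, a hedged sketch using the high-level pipeline would be the following; the legal question is a placeholder, and the presence of a chat template in the checkpoint is assumed.
```python
# Hypothetical sketch inferred from the repo tags (Llama-3 chat model fine-tuned for legal QA).
import torch
from transformers import AutoTokenizer, pipeline
model_id = "wahid028/llama3-finetuned-legalQA-unsloth"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline("text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto")
messages = [{"role": "user", "content": "What is the difference between a contract and an agreement?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(generator(prompt, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```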
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MrezaPRZ/codegemma_synthetic_gretel | MrezaPRZ | 2024-05-18T02:36:29Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T02:32:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
quirky-lats-at-mats/bio_ga_old_1 | quirky-lats-at-mats | 2024-05-18T02:35:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T02:16:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## WMDP Accuracy
- wmdp-bio: 0.62
- wmdp-cyber: 0.37
- wmdp-chem: 0.43
- mmlu: 0.61
- retraining: no need
## WandB run links
Training: quirky_lats_at_mats/wmdp_lat/sdpctqlb
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
apwic/sentiment-lora-r8a2d0.05-0 | apwic | 2024-05-18T02:34:15Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-18T02:01:05Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r8a2d0.05-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r8a2d0.05-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3260
- Accuracy: 0.8622
- Precision: 0.8319
- Recall: 0.8400
- F1: 0.8357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5609 | 1.0 | 122 | 0.5086 | 0.7193 | 0.6580 | 0.6514 | 0.6543 |
| 0.4986 | 2.0 | 244 | 0.4855 | 0.7494 | 0.7127 | 0.7427 | 0.7201 |
| 0.4593 | 3.0 | 366 | 0.4238 | 0.7694 | 0.7249 | 0.7394 | 0.7309 |
| 0.3957 | 4.0 | 488 | 0.3916 | 0.8070 | 0.7670 | 0.7735 | 0.7700 |
| 0.3658 | 5.0 | 610 | 0.4266 | 0.7995 | 0.7641 | 0.7981 | 0.7744 |
| 0.3345 | 6.0 | 732 | 0.3666 | 0.8371 | 0.8028 | 0.8072 | 0.8049 |
| 0.3237 | 7.0 | 854 | 0.3714 | 0.8396 | 0.8045 | 0.8265 | 0.8136 |
| 0.304 | 8.0 | 976 | 0.3537 | 0.8421 | 0.8083 | 0.8158 | 0.8119 |
| 0.3027 | 9.0 | 1098 | 0.3531 | 0.8446 | 0.8111 | 0.8201 | 0.8153 |
| 0.2962 | 10.0 | 1220 | 0.3382 | 0.8521 | 0.8220 | 0.8204 | 0.8212 |
| 0.2721 | 11.0 | 1342 | 0.3490 | 0.8496 | 0.8162 | 0.8311 | 0.8229 |
| 0.2693 | 12.0 | 1464 | 0.3502 | 0.8546 | 0.8220 | 0.8372 | 0.8288 |
| 0.2745 | 13.0 | 1586 | 0.3284 | 0.8571 | 0.8289 | 0.8239 | 0.8264 |
| 0.2712 | 14.0 | 1708 | 0.3297 | 0.8596 | 0.8299 | 0.8332 | 0.8315 |
| 0.256 | 15.0 | 1830 | 0.3357 | 0.8647 | 0.8346 | 0.8442 | 0.8391 |
| 0.2504 | 16.0 | 1952 | 0.3346 | 0.8571 | 0.8255 | 0.8364 | 0.8306 |
| 0.2487 | 17.0 | 2074 | 0.3242 | 0.8571 | 0.8281 | 0.8264 | 0.8272 |
| 0.2514 | 18.0 | 2196 | 0.3309 | 0.8622 | 0.8314 | 0.8425 | 0.8365 |
| 0.2451 | 19.0 | 2318 | 0.3243 | 0.8622 | 0.8333 | 0.8350 | 0.8341 |
| 0.2461 | 20.0 | 2440 | 0.3260 | 0.8622 | 0.8319 | 0.8400 | 0.8357 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
DownwardSpiral33/gpt2-imdb-pos-v2-001 | DownwardSpiral33 | 2024-05-18T02:28:40Z | 158 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T02:28:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Minbyul/selfbiorag-7b-wo-kqa_golden-iter-dpo-step4-filtered | Minbyul | 2024-05-18T02:28:04Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/selfbiorag-7b-wo-kqa_golden-iter-dpo-step3-filtered",
"base_model:finetune:Minbyul/selfbiorag-7b-wo-kqa_golden-iter-dpo-step3-filtered",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T01:39:15Z | ---
base_model: Minbyul/selfbiorag-7b-wo-kqa_golden-iter-dpo-step3-filtered
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: selfbiorag-7b-wo-kqa_golden-iter-dpo-step4-filtered
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# selfbiorag-7b-wo-kqa_golden-iter-dpo-step4-filtered
This model is a fine-tuned version of [Minbyul/selfbiorag-7b-wo-kqa_golden-iter-dpo-step3-filtered](https://huggingface.co/Minbyul/selfbiorag-7b-wo-kqa_golden-iter-dpo-step3-filtered) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6766
- Rewards/chosen: -0.0828
- Rewards/rejected: -0.1144
- Rewards/accuracies: 0.6319
- Rewards/margins: 0.0316
- Logps/rejected: -98.9601
- Logps/chosen: -79.1920
- Logits/rejected: -1.2073
- Logits/chosen: -1.1930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
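Expressed as `transformers.TrainingArguments`, these settings correspond roughly to the sketch below (illustrative only; the actual run used the TRL/alignment-handbook DPO recipe with a 4-GPU launcher):
```python
from transformers import TrainingArguments

# Output directory name is illustrative; hyperparameters mirror the list above.
args = TrainingArguments(
    output_dir="selfbiorag-7b-dpo-step4",
    learning_rate=5e-07,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 4 GPUs x 8 per device x 2 steps = total batch size 64
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```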
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
evjohnson/ADR_Model | evjohnson | 2024-05-18T02:08:31Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-16T01:44:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/sentiment-lora-r8a1d0.15-0 | apwic | 2024-05-18T02:00:48Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-18T01:27:35Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r8a1d0.15-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r8a1d0.15-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3217
- Accuracy: 0.8622
- Precision: 0.8326
- Recall: 0.8375
- F1: 0.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5593 | 1.0 | 122 | 0.5026 | 0.7268 | 0.6658 | 0.6542 | 0.6589 |
| 0.4995 | 2.0 | 244 | 0.4797 | 0.7544 | 0.7149 | 0.7412 | 0.7226 |
| 0.4612 | 3.0 | 366 | 0.4282 | 0.7644 | 0.7199 | 0.7358 | 0.7262 |
| 0.4019 | 4.0 | 488 | 0.3934 | 0.8296 | 0.7949 | 0.7919 | 0.7934 |
| 0.3665 | 5.0 | 610 | 0.4234 | 0.7970 | 0.7618 | 0.7964 | 0.7720 |
| 0.334 | 6.0 | 732 | 0.3723 | 0.8195 | 0.7817 | 0.7973 | 0.7884 |
| 0.3263 | 7.0 | 854 | 0.3704 | 0.8346 | 0.7990 | 0.8230 | 0.8086 |
| 0.3076 | 8.0 | 976 | 0.3521 | 0.8471 | 0.8153 | 0.8168 | 0.8160 |
| 0.298 | 9.0 | 1098 | 0.3522 | 0.8471 | 0.8138 | 0.8243 | 0.8187 |
| 0.2923 | 10.0 | 1220 | 0.3375 | 0.8571 | 0.8289 | 0.8239 | 0.8264 |
| 0.2689 | 11.0 | 1342 | 0.3392 | 0.8622 | 0.8319 | 0.8400 | 0.8357 |
| 0.2686 | 12.0 | 1464 | 0.3484 | 0.8622 | 0.8309 | 0.8450 | 0.8373 |
| 0.2726 | 13.0 | 1586 | 0.3258 | 0.8596 | 0.8316 | 0.8282 | 0.8298 |
| 0.2713 | 14.0 | 1708 | 0.3246 | 0.8622 | 0.8333 | 0.8350 | 0.8341 |
| 0.2577 | 15.0 | 1830 | 0.3307 | 0.8596 | 0.8293 | 0.8357 | 0.8324 |
| 0.2519 | 16.0 | 1952 | 0.3305 | 0.8622 | 0.8314 | 0.8425 | 0.8365 |
| 0.2488 | 17.0 | 2074 | 0.3234 | 0.8546 | 0.8246 | 0.8246 | 0.8246 |
| 0.2546 | 18.0 | 2196 | 0.3247 | 0.8647 | 0.8346 | 0.8442 | 0.8391 |
| 0.2463 | 19.0 | 2318 | 0.3204 | 0.8596 | 0.8307 | 0.8307 | 0.8307 |
| 0.2458 | 20.0 | 2440 | 0.3217 | 0.8622 | 0.8326 | 0.8375 | 0.8349 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
multitensor/inf2_dir | multitensor | 2024-05-18T01:56:35Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2024-05-17T23:26:10Z | # Mistral on AWS Inf2 with FastAPI
Use FastAPI to quickly serve a Mistral model on an AWS Inferentia2 (Inf2) instance 🚀
Supports the multimodal input type (input_embeds) 🖼️

## Environment Setup
Follow the instructions in Neuron docs [Pytorch Neuron Setup](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-setup.html) for basic environment setup.
## Install Packages
Activate the virtual environment and install the extra packages.
```
cd app
pip install -r requirements.txt
```
## Run the App
```
uvicorn main:app --host 0.0.0.0 --port 8000
```
## Send the Request
Test the input_ids (plain text prompt) version:
```
cd client
python client.py
```
Test the input_embeds version (a common multimodal input path that skips the embedding layer):
```
cd client
python embeds_client.py
```
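For reference, a minimal request sketch in Python (the route name and JSON fields below are assumptions for illustration only; check `main.py` and `client/client.py` for the actual endpoint and payload schema):
```python
import requests

# Hypothetical route and payload; the real ones are defined in main.py / client/client.py.
resp = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": "Explain AWS Inferentia2 in one sentence.", "max_new_tokens": 64},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```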
## Container
You can build the container image from the Dockerfile, or use the pre-built image:
```
docker run --rm --name mistral -d -p 8000:8000 --device=/dev/neuron0 public.ecr.aws/shtian/fastapi-mistral
```
|
wcyat/whisper-medium-yue-mdcc | wcyat | 2024-05-18T01:34:54Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-17T16:42:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: openai/whisper-medium
model-index:
- name: whisper-medium-yue-mdcc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-yue-mdcc
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
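In the absence of detailed usage notes, a minimal inference sketch with the standard `transformers` ASR pipeline is shown below (assumes `torch` and ffmpeg are available for audio decoding; not part of the original training setup):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for speech recognition.
asr = pipeline("automatic-speech-recognition", model="wcyat/whisper-medium-yue-mdcc")
print(asr("sample.wav"))  # "sample.wav" is a hypothetical audio file; returns {"text": "..."}
```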
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 160
- eval_batch_size: 160
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
phuong123/translate_med_term_ev_final | phuong123 | 2024-05-18T01:28:56Z | 62 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:phuong123/translate_med_term_ev_final",
"base_model:finetune:phuong123/translate_med_term_ev_final",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-17T01:48:23Z | ---
license: openrail
base_model: phuong123/translate_med_term_ev_final
tags:
- generated_from_keras_callback
model-index:
- name: translate_med_term_ev_final
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# translate_med_term_ev_final
This model is a fine-tuned version of [phuong123/translate_med_term_ev_final](https://huggingface.co/phuong123/translate_med_term_ev_final) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3185
- Validation Loss: 0.3968
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1886, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
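The optimizer dictionary above corresponds to the kind of Keras setup produced by `transformers.create_optimizer`; a rough equivalent is sketched below (assuming TensorFlow is installed; this is not the exact training script):
```python
from transformers import create_optimizer

# 5e-05 decayed linearly (PolynomialDecay, power=1.0) to 0 over 1886 steps,
# using AdamWeightDecay with weight_decay_rate=0.01 and no warmup.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-05,
    num_train_steps=1886,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```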
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6435 | 0.4200 | 0 |
| 0.3185 | 0.3968 | 1 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
adrake17/Meta-Llama-2-7B-Chat-Amazon | adrake17 | 2024-05-18T01:26:43Z | 1 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"conversational",
"en",
"dataset:McAuley-Lab/Amazon-Reviews-2023",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"license:mit",
"region:us"
] | text-generation | 2024-05-17T21:16:25Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
license: mit
datasets:
- McAuley-Lab/Amazon-Reviews-2023
language:
- en
metrics:
- rouge
pipeline_tag: text-generation
---
# Model Card for Model ID
This model takes an Amazon review as input, along with its user rating and product categories. It then generates a title for the review.
## Model Details
Example prompt:
\<s\>[INST] \<\<SYS\>\>
Generate the best succinct title for the following product review. Your only output should be the title itself. Do not mention the user rating in the title. Product rating: 1/5 stars. Product categories: 'Automotive, Interior Accessories, Floor Mats & Cargo Liners, Floor Mats'.
\<\</SYS\>\>
These are super flimsy and the mats slip and roll around on the floor, can be pretty dangerous when the slip and fold by the pedals. Avoid buying these. Waste of money. You're better off without any mats than having these. [/INST] "These mats slip, fold, bunch, and roll around your car floor. AVOID." \</s\>
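A minimal loading-and-generation sketch (assumptions: `peft` and `transformers` are installed, you have access to the gated base model `meta-llama/Llama-2-7b-chat-hf`, and the generation settings are illustrative rather than tuned):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "adrake17/Meta-Llama-2-7B-Chat-Amazon"
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the prompt in the [INST]/<<SYS>> format shown above; the tokenizer adds <s> itself.
prompt = (
    "[INST] <<SYS>>\nGenerate the best succinct title for the following product review. "
    "Your only output should be the title itself. Do not mention the user rating in the title. "
    "Product rating: 1/5 stars. Product categories: 'Automotive, Interior Accessories, "
    "Floor Mats & Cargo Liners, Floor Mats'.\n<</SYS>>\n"
    "These are super flimsy and the mats slip and roll around on the floor. [/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```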
### Framework versions
- PEFT 0.10.1.dev0 |
zhangce/test4 | zhangce | 2024-05-18T01:20:46Z | 155 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T01:08:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rifatrzn/llama3-8b | rifatrzn | 2024-05-18T01:20:16Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-18T01:20:16Z | ---
license: apache-2.0
---
|
ahmedesmail16/Train-Augmentation-beit-large | ahmedesmail16 | 2024-05-18T01:08:32Z | 13 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-large-patch16-224-pt22k-ft22k",
"base_model:finetune:microsoft/beit-large-patch16-224-pt22k-ft22k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-17T14:51:17Z | ---
base_model: microsoft/beit-large-patch16-224-pt22k-ft22k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Train-Augmentation-beit-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Train-Augmentation-beit-large
This model is a fine-tuned version of [microsoft/beit-large-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-large-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8592
- Accuracy: 0.8182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0993 | 0.99 | 93 | 0.6675 | 0.8340 |
| 0.0492 | 2.0 | 187 | 0.8597 | 0.8379 |
| 0.0134 | 2.99 | 280 | 0.7961 | 0.8024 |
| 0.0016 | 4.0 | 374 | 0.7594 | 0.8340 |
| 0.0004 | 4.97 | 465 | 0.8592 | 0.8182 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.15.2
|
tabbas97/distilbert-base-uncased-finetuned-pubmed-torch-trained-tabbas97 | tabbas97 | 2024-05-18T01:06:46Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:pubmed-summarization",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-16T20:01:43Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- pubmed-summarization
model-index:
- name: distilbert-base-uncased-finetuned-pubmed-torch-trained-tabbas97
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-pubmed-torch-trained-tabbas97
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the pubmed-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
- Pre-finetune perplexity: 11.65
- Post-finetune perplexity: 3.99
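These perplexity figures follow directly from the evaluation cross-entropy loss (perplexity = exp(loss)); for example, the post-finetune value matches the reported eval loss of 1.3843:
```python
import math

eval_loss = 1.3843          # evaluation loss reported above
print(math.exp(eval_loss))  # ~3.99, the post-finetune perplexity
```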
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Rimyy/mistraftgsm8 | Rimyy | 2024-05-18T01:06:07Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T01:01:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jswing/dqn-spaceinvaders2 | jswing | 2024-05-18T01:02:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T00:21:55Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 258.50 +/- 169.00
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub
# Filename assumed from the usual SB3 naming convention; adjust to the actual file in this repo.
checkpoint = load_from_hub(repo_id="jswing/dqn-spaceinvaders2", filename="dqn-SpaceInvadersNoFrameskip-v4.zip")
model = DQN.load(checkpoint)
```
|
mradermacher/Honyaku-13b-GGUF | mradermacher | 2024-05-18T00:59:45Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:aixsatoshi/Honyaku-13b",
"base_model:quantized:aixsatoshi/Honyaku-13b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T18:07:39Z | ---
base_model: aixsatoshi/Honyaku-13b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/aixsatoshi/Honyaku-13b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
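As one concrete option, the quants can also be loaded from Python with the `llama-cpp-python` bindings; a minimal sketch (assumes the package is installed, a quant from the table below has been downloaded locally, and the prompt wording is illustrative; see the original aixsatoshi/Honyaku-13b card for the expected translation prompt template):
```python
from llama_cpp import Llama

# Point model_path at whichever quant you downloaded from the table below.
llm = Llama(model_path="Honyaku-13b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Translate the following English into Japanese: The weather is nice today.", max_tokens=128)
print(out["choices"][0]["text"])
```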
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.IQ3_M.gguf) | IQ3_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q3_K_L.gguf) | Q3_K_L | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q5_K_S.gguf) | Q5_K_S | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q5_K_M.gguf) | Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q6_K.gguf) | Q6_K | 10.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Honyaku-13b-GGUF/resolve/main/Honyaku-13b.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
smallsuper/Meta-Llama-3-8B-4bit-32rank | smallsuper | 2024-05-18T00:57:57Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-17T22:53:00Z | ---
pipeline_tag: feature-extraction
---
## Overview
This is a bare model without any output layer or classification head. It has been quantized to be used for feature extraction tasks.
**Usage**
This model is intended to be used as a base for training on downstream tasks. In order to use it for predictions and inference, it should be fine-tuned on a specific task with an appropriate output layer or classification head added.
**Quantization**
The model has been quantized using the following parameters:
- LoRA alpha: 16
- LoRA rank: 32
- LoRA target modules: all-linear
- Bits: 4
- LoftQ iterations: 5 |
henry-skywalker/mistral_7b_orpo_search_16bit_gguf | henry-skywalker | 2024-05-18T00:56:40Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-17T22:32:10Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** henry-skywalker
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf | RichardErkhov | 2024-05-18T00:55:52Z | 33 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T22:57:28Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
webMistral-7B - GGUF
- Model creator: https://huggingface.co/KnutJaegersberg/
- Original model: https://huggingface.co/KnutJaegersberg/webMistral-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [webMistral-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [webMistral-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [webMistral-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [webMistral-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [webMistral-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [webMistral-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [webMistral-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [webMistral-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [webMistral-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [webMistral-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [webMistral-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [webMistral-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [webMistral-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [webMistral-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [webMistral-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [webMistral-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [webMistral-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [webMistral-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [webMistral-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [webMistral-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [webMistral-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [webMistral-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_webMistral-7B-gguf/blob/main/webMistral-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: cc-by-nc-4.0
---
Prompt Example:
```
## Question: What is slowing down our internet speeds?
### Google Search Results: - Are my other devices slowing down my connection? Gadgets with slower internet technology can slow down speeds for all your other devices. Everyone knows the feeling: you’re hard at work and then suddenly the Internet seems to slow down. Why is that? From background programs to cheap routers, here are a few reasons why your Internet might be a concern. When working outside your home, here’s how to keep your information safe on public Wi-Fi. If your internet slows down only when too many other people are online simultaneously, you’re probably using more bandwidth than your plan allows. Use our internet speed test to see if you’re getting the speed advertised by your ISP. If your results are close to your plan speed, consider upgrading. Generally, your modem or router (or both) will create a speed bottleneck if not working properly—the same goes with wireless gateways. If your equipment is too old, it may not support important internet protocols. Equipment damage, such as bad ports or components, can also cause slowdowns. Is your internet suddenly moving slowly? It could be due to an outdated router or a less-than-ideal router location. Your connection issues may need only an easy fix, like upgrading to a mesh network (which also has to be set up in the right spot) or simply restarting your modem and router. But if you've already attempted many of the tried-and-true methods and your internet speeds are still subpar, the issue might be something your internet service provider is intentionally doing: bandwidth throttling.
### Response: There are several factors that can slow down internet speeds. These include having gadgets with slower internet technology, running background programs[2], using more bandwidth than your plan allows[3], equipment damage[4], an outdated router or a less-than-ideal router location[5], and bandwidth throttling by the internet service provider[5].
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__webMistral-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 47.08 |
| ARC (25-shot) | 59.04 |
| HellaSwag (10-shot) | 80.89 |
| MMLU (5-shot) | 59.0 |
| TruthfulQA (0-shot) | 39.71 |
| Winogrande (5-shot) | 76.32 |
| GSM8K (5-shot) | 8.87 |
| DROP (3-shot) | 5.75 |
|
apwic/sentiment-lora-r8a1d0.05-0 | apwic | 2024-05-18T00:53:47Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-18T00:20:37Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r8a1d0.05-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r8a1d0.05-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3260
- Accuracy: 0.8622
- Precision: 0.8319
- Recall: 0.8400
- F1: 0.8357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
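For reference, the hyperparameters above roughly translate to the `transformers`/`peft` configuration sketched below. This is a reconstruction, not the author's training script; in particular the LoRA values (r=8, alpha=1, dropout=0.05) are only inferred from the model name and should be treated as assumptions.

```python
# Sketch of a configuration matching the hyperparameters listed above.
# The LoRA values (r=8, lora_alpha=1, lora_dropout=0.05) are guessed from the
# model name "sentiment-lora-r8a1d0.05-0" and are NOT confirmed by the card.
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="sentiment-lora-r8a1d0.05-0",
    learning_rate=5e-5,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
)

lora_config = LoraConfig(
    r=8,
    lora_alpha=1,
    lora_dropout=0.05,
    task_type="SEQ_CLS",  # sentiment classification assumed from the model name
)
```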
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5609 | 1.0 | 122 | 0.5086 | 0.7193 | 0.6580 | 0.6514 | 0.6543 |
| 0.4986 | 2.0 | 244 | 0.4855 | 0.7494 | 0.7127 | 0.7427 | 0.7201 |
| 0.4593 | 3.0 | 366 | 0.4238 | 0.7694 | 0.7249 | 0.7394 | 0.7309 |
| 0.3957 | 4.0 | 488 | 0.3916 | 0.8070 | 0.7670 | 0.7735 | 0.7700 |
| 0.3658 | 5.0 | 610 | 0.4266 | 0.7995 | 0.7641 | 0.7981 | 0.7744 |
| 0.3345 | 6.0 | 732 | 0.3666 | 0.8371 | 0.8028 | 0.8072 | 0.8049 |
| 0.3237 | 7.0 | 854 | 0.3714 | 0.8396 | 0.8045 | 0.8265 | 0.8136 |
| 0.304 | 8.0 | 976 | 0.3537 | 0.8421 | 0.8083 | 0.8158 | 0.8119 |
| 0.3027 | 9.0 | 1098 | 0.3531 | 0.8446 | 0.8111 | 0.8201 | 0.8153 |
| 0.2962 | 10.0 | 1220 | 0.3382 | 0.8521 | 0.8220 | 0.8204 | 0.8212 |
| 0.2721 | 11.0 | 1342 | 0.3490 | 0.8496 | 0.8162 | 0.8311 | 0.8229 |
| 0.2693 | 12.0 | 1464 | 0.3502 | 0.8546 | 0.8220 | 0.8372 | 0.8288 |
| 0.2745 | 13.0 | 1586 | 0.3284 | 0.8571 | 0.8289 | 0.8239 | 0.8264 |
| 0.2712 | 14.0 | 1708 | 0.3297 | 0.8596 | 0.8299 | 0.8332 | 0.8315 |
| 0.256 | 15.0 | 1830 | 0.3357 | 0.8647 | 0.8346 | 0.8442 | 0.8391 |
| 0.2504 | 16.0 | 1952 | 0.3346 | 0.8571 | 0.8255 | 0.8364 | 0.8306 |
| 0.2487 | 17.0 | 2074 | 0.3242 | 0.8571 | 0.8281 | 0.8264 | 0.8272 |
| 0.2514 | 18.0 | 2196 | 0.3309 | 0.8622 | 0.8314 | 0.8425 | 0.8365 |
| 0.2451 | 19.0 | 2318 | 0.3243 | 0.8622 | 0.8333 | 0.8350 | 0.8341 |
| 0.2461 | 20.0 | 2440 | 0.3260 | 0.8622 | 0.8319 | 0.8400 | 0.8357 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
EdgarDev/instruct-aira-dataset-finetuned-imdb | EdgarDev | 2024-05-18T00:53:07Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-18T00:00:24Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: instruct-aira-dataset-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# instruct-aira-dataset-finetuned-imdb
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3832 | 1.0 | 366 | 1.1658 |
| 1.1968 | 2.0 | 732 | 1.0905 |
| 1.1464 | 3.0 | 1098 | 1.0390 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
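As a quick smoke test, the checkpoint can be exercised with the standard fill-mask pipeline. This is a minimal sketch; the example sentence is arbitrary and only the repository id comes from the card.

```python
# Minimal sketch: query the fine-tuned masked-LM via the standard pipeline.
# XLM-RoBERTa models use "<mask>" as the mask token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="EdgarDev/instruct-aira-dataset-finetuned-imdb")
for pred in fill_mask("This movie was absolutely <mask>."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```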
|
benredmond/llama-3-kto-16bit | benredmond | 2024-05-18T00:51:09Z | 14 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:finetune:unsloth/llama-3-8b-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T00:48:27Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct
---
# Uploaded model
- **Developed by:** benredmond
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
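Since the base is a Llama-3 8B Instruct variant, the checkpoint should work with the usual chat-template flow. The sketch below uses plain `transformers` rather than Unsloth's fast loader and is illustrative only; the dtype and prompt are my assumptions.

```python
# Illustrative only: load the fine-tuned checkpoint with plain transformers and
# generate from a chat-formatted prompt via the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "benredmond/llama-3-kto-16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what KTO fine-tuning is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```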
|