modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-26 12:28:48) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 498 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-26 12:28:16) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
hsikchi/pythia-6.9b-goldrm_tldr-dpo-beta-0.1-alpha-0-step-19968 | hsikchi | 2024-05-18T18:08:18Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T18:03:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
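The card leaves this section blank. Until it is filled in, the following is a minimal, unofficial sketch that assumes the checkpoint loads through the standard Transformers causal-LM API, consistent with the `gpt_neox` and `text-generation` tags listed above; the prompt is purely illustrative.
```python
# Unofficial sketch (not from the card): standard causal-LM loading and generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hsikchi/pythia-6.9b-goldrm_tldr-dpo-beta-0.1-alpha-0-step-19968"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Summarize: the quick brown fox jumps over the lazy dog.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```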
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ranystephan/NeuralFinGPT-v1-10 | ranystephan | 2024-05-18T18:08:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T18:07:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4_prune-8bits | RichardErkhov | 2024-05-18T18:07:14Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-18T17:56:59Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
blockchainlabs_7B_merged_test2_4_prune - bnb 8bits
- Model creator: https://huggingface.co/alnrg2arg/
- Original model: https://huggingface.co/alnrg2arg/blockchainlabs_7B_merged_test2_4_prune/
Original model description:
---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- pruning
- alnrg2arg/blockchainlabs_7B_merged_test2_4
- mlabonne/NeuralBeagle14-7B
- udkai/Turdus
---
# blockchainlabs_7B_merged_test2_4_prune
blockchainlabs_7B_merged_test2_4_prune is a pruned model based on alnrg2arg/blockchainlabs_7B_merged_test2_4, which is a merge of the
following models made with [mergekit](https://github.com/cg123/mergekit):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
Pruning Kit I used: [wanda](https://github.com/locuslab/wanda?tab=readme-ov-file#ablation-on-obs-weight-update)
## 🧩 Configuration
```json
{
"_name_or_path": "alnrg2arg/blockchainlabs_7B_merged_test2_4_prun",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.36.2",
"use_cache": false,
"vocab_size": 32000
}
```
|
mbhargav/zephyr-support-chatbot | mbhargav | 2024-05-18T18:07:13Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | 2024-05-18T17:30:54Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr-support-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-support-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for one way they could map to `TrainingArguments`):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
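As an illustration only (this is not the author's training script), the logged values above could be expressed with `transformers.TrainingArguments` for a TRL `SFTTrainer` run roughly as follows; model and dataset loading are omitted, and the output directory name is an assumption.
```python
# Hedged sketch: the logged hyperparameters mapped onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-support-chatbot",   # assumed name
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,             # "Native AMP" mixed precision
    optim="adamw_torch",   # Adam with betas=(0.9, 0.999) and epsilon=1e-08 (defaults)
)
```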
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
liminerity/mm4.ascii.star.gguf | liminerity | 2024-05-18T18:05:16Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:liminerity/mm4.star",
"base_model:quantized:liminerity/mm4.star",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-18T18:02:31Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: liminerity/mm4.star
---
# Uploaded model
- **Developed by:** liminerity
- **License:** apache-2.0
- **Finetuned from model :** liminerity/mm4.star
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jubliano/wav2vec2-large-xls-r-300m-ipa-INTERNATIONAL1.3 | Jubliano | 2024-05-18T18:03:32Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-17T19:08:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
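The card leaves this section blank. As a placeholder, here is a minimal, unofficial sketch using the standard `automatic-speech-recognition` pipeline, consistent with the `wav2vec2` tag above; the audio file name is a stand-in.
```python
# Unofficial sketch (not from the card): transcribe an audio file with the ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Jubliano/wav2vec2-large-xls-r-300m-ipa-INTERNATIONAL1.3",
)
print(asr("sample.wav"))  # "sample.wav" is a placeholder audio file
```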
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
liminerity/mm4.ascii.star | liminerity | 2024-05-18T18:03:26Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"dataset:gate369/alpaca-star-ascii",
"dataset:gate369/Alpaca-Star",
"base_model:liminerity/mm4.star",
"base_model:finetune:liminerity/mm4.star",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T17:56:27Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: liminerity/mm4.star
datasets:
- gate369/alpaca-star-ascii
- gate369/Alpaca-Star
---

# Uploaded model
- **Developed by:** liminerity
- **License:** apache-2.0
- **Finetuned from model :** liminerity/mm4.star
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
hsikchi/pythia-6.9b-goldrm_tldr-dpo-beta-0.1-alpha-0-LATEST | hsikchi | 2024-05-18T18:02:10Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T17:57:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hsikchi/pythia-6.9b-goldrm_tldr-dpo-beta-0.5-alpha-0-step-79872 | hsikchi | 2024-05-18T18:02:02Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T17:57:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RayanNan/dementia | RayanNan | 2024-05-18T18:00:34Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T11:52:23Z | ---
library_name: transformers
tags:
- llama-factory
---
# LLM Model (AI Assistant) for Dementia Detection
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a model based on Llama-2-7B, aimed at dementia detection.
- **Model type:** Llama-2-7B
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
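The card leaves this section blank. A minimal, unofficial sketch follows, assuming the usual Transformers chat workflow for a Llama-2-based conversational model; if the tokenizer ships no chat template, format the prompt manually instead.
```python
# Unofficial sketch (not from the card): chat-style generation with a Llama-2-based model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RayanNan/dementia"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What early signs of dementia should a caregiver watch for?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```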
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OsherElhadad/sac-her-PandaReachJointsSparse-v3-250000-future | OsherElhadad | 2024-05-18T18:00:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachJointsSparse-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T17:55:58Z | ---
library_name: stable-baselines3
tags:
- PandaReachJointsSparse-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: sac-her-
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachJointsSparse-v3
type: PandaReachJointsSparse-v3
metrics:
- type: mean_reward
value: -2.00 +/- 1.18
name: mean_reward
verified: false
---
# **sac-her-** Agent playing **PandaReachJointsSparse-v3**
This is a trained model of a **sac-her-** agent playing **PandaReachJointsSparse-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
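Until the TODO above is filled in, the following is a hedged sketch of the typical loading pattern with `huggingface_sb3` and Stable-Baselines3; the checkpoint filename and the use of `panda_gym` are assumptions, not details from this card.
```python
# Hedged sketch: typical SB3 + huggingface_sb3 loading flow (filename is assumed).
import gymnasium as gym
import panda_gym  # assumed dependency; registers PandaReachJointsSparse-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC

checkpoint = load_from_hub(
    repo_id="OsherElhadad/sac-her-PandaReachJointsSparse-v3-250000-future",
    filename="sac-her-PandaReachJointsSparse-v3.zip",  # assumed filename; check the repo files
)
env = gym.make("PandaReachJointsSparse-v3")
model = SAC.load(checkpoint, env=env)

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```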
|
afrideva/phi-3-portuguese-tom-cat-4k-instruct-GGUF | afrideva | 2024-05-18T17:56:42Z | 17 | 1 | transformers | [
"transformers",
"gguf",
"portugues",
"portuguese",
"QA",
"instruct",
"phi",
"ggml",
"quantized",
"text-generation",
"pt",
"dataset:rhaymison/superset",
"base_model:rhaymison/phi-3-portuguese-tom-cat-4k-instruct",
"base_model:quantized:rhaymison/phi-3-portuguese-tom-cat-4k-instruct",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-18T17:31:48Z | ---
base_model: rhaymison/phi-3-portuguese-tom-cat-4k-instruct
datasets:
- rhaymison/superset
inference: true
language:
- pt
library_name: transformers
license: apache-2.0
model-index:
- name: phi-3-portuguese-tom-cat-4k-instruct
results:
- dataset:
args:
num_few_shot: 3
name: ENEM Challenge (No Images)
split: train
type: eduagarcia/enem_challenge
metrics:
- name: accuracy
type: acc
value: 61.58
source:
name: Open Portuguese LLM Leaderboard
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 3
name: BLUEX (No Images)
split: train
type: eduagarcia-temp/BLUEX_without_images
metrics:
- name: accuracy
type: acc
value: 50.63
source:
name: Open Portuguese LLM Leaderboard
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 3
name: OAB Exams
split: train
type: eduagarcia/oab_exams
metrics:
- name: accuracy
type: acc
value: 43.69
source:
name: Open Portuguese LLM Leaderboard
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 15
name: Assin2 RTE
split: test
type: assin2
metrics:
- name: f1-macro
type: f1_macro
value: 91.54
source:
name: Open Portuguese LLM Leaderboard
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 15
name: Assin2 STS
split: test
type: eduagarcia/portuguese_benchmark
metrics:
- name: pearson
type: pearson
value: 75.27
source:
name: Open Portuguese LLM Leaderboard
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 15
name: FaQuAD NLI
split: test
type: ruanchaves/faquad-nli
metrics:
- name: f1-macro
type: f1_macro
value: 47.46
source:
name: Open Portuguese LLM Leaderboard
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 25
name: HateBR Binary
split: test
type: ruanchaves/hatebr
metrics:
- name: f1-macro
type: f1_macro
value: 83.01
source:
name: Open Portuguese LLM Leaderboard
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 25
name: PT Hate Speech Binary
split: test
type: hate_speech_portuguese
metrics:
- name: f1-macro
type: f1_macro
value: 70.19
source:
name: Open Portuguese LLM Leaderboard
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 25
name: tweetSentBR
split: test
type: eduagarcia/tweetsentbr_fewshot
metrics:
- name: f1-macro
type: f1_macro
value: 57.78
source:
name: Open Portuguese LLM Leaderboard
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
task:
name: Text Generation
type: text-generation
model_creator: rhaymison
model_name: phi-3-portuguese-tom-cat-4k-instruct
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- portugues
- portuguese
- QA
- instruct
- phi
- gguf
- ggml
- quantized
---
# phi-3-portuguese-tom-cat-4k-instruct-GGUF
Quantized GGUF model files for [phi-3-portuguese-tom-cat-4k-instruct](https://huggingface.co/rhaymison/phi-3-portuguese-tom-cat-4k-instruct) from [rhaymison](https://huggingface.co/rhaymison)
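A hedged sketch of running one of these GGUF files with `llama-cpp-python` follows; the exact quantization filename is an assumption (check the repository's file list), and the prompt format mirrors the template shown in the original card below.
```python
# Hedged sketch: local GGUF inference with llama-cpp-python (filename is assumed).
from llama_cpp import Llama

llm = Llama(
    model_path="phi-3-portuguese-tom-cat-4k-instruct.q4_k_m.gguf",  # assumed filename
    n_ctx=4096,
)
prompt = (
    "<s><|system|>\n"
    "Abaixo está uma instrução que descreve uma tarefa. Escreva uma resposta que complete adequadamente o pedido.\n"
    "<|user|>\n"
    "Quem escreveu Os Lusíadas?\n"
    "<|assistant|>\n"
)
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```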
## Original Model Card:
# Phi-3-portuguese-tom-cat-4k-instruct
<p align="center">
<img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/tom-cat.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
This model was trained on a superset of 300,000 instructions in Portuguese.
It is meant to help fill the gap in Portuguese-language models, and was tuned from microsoft/Phi-3-mini-4k.
# How to use
### FULL MODEL: A100
### HALF MODEL: L4
### 8-bit or 4-bit: T4 or V100
You can use the model in its full form or quantized down to 4-bit; both approaches are shown below.
Remember that verbs are important in your prompt: tell the model how to act or behave so that you can guide it along the path of its response.
Details like these help models (even small 4B models) perform much better.
```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model = AutoModelForCausalLM.from_pretrained("rhaymison/phi-3-portuguese-tom-cat-4k-instruct", device_map= {"": 0})
tokenizer = AutoTokenizer.from_pretrained("rhaymison/phi-3-portuguese-tom-cat-4k-instruct")
model.eval()
```
You can also use it with a `pipeline`.
```python
from transformers import pipeline
pipe = pipeline("text-generation",
model=model,
tokenizer=tokenizer,
do_sample=True,
max_new_tokens=512,
num_beams=2,
temperature=0.3,
top_k=50,
top_p=0.95,
early_stopping=True,
pad_token_id=tokenizer.eos_token_id,
)
def format_template(question: str):
    system_prompt = "Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido."
    return f"""<s><|system|>
{ system_prompt }
<|user|>
{ question }
<|assistant|>
"""
question = format_template("E possivel ir de Carro dos Estados unidos ate o japão")
pipe(question)
```
If you run into a memory problem such as "CUDA out of memory", you should use 4-bit or 8-bit quantization.
For the full model in Colab you will need an A100.
With 4-bit or 8-bit quantization, a T4 or L4 already solves the problem.
# 4bits example
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Repo id used earlier in this card
base_model = "rhaymison/phi-3-portuguese-tom-cat-4k-instruct"

bnb_4bit_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_4bit_config,
    device_map={"": 0}
)
```
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/rhaymison/phi-3-portuguese-tom-cat-4k-instruct) and on the [Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**64.57**|
|ENEM Challenge (No Images)| 61.58|
|BLUEX (No Images) | 50.63|
|OAB Exams | 43.69|
|Assin2 RTE | 91.54|
|Assin2 STS | 75.27|
|FaQuAD NLI | 47.46|
|HateBR Binary | 83.01|
|PT Hate Speech Binary | 70.19|
|tweetSentBR | 57.78|
### Comments
Any idea, help or report will always be welcome.
email: [email protected]
<div style="display:flex; flex-direction:row; justify-content:left">
<a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
<a href="https://github.com/rhaymisonbetini" target="_blank">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
</a> |
RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4_prune-4bits | RichardErkhov | 2024-05-18T17:56:25Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-18T17:50:18Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
blockchainlabs_7B_merged_test2_4_prune - bnb 4bits
- Model creator: https://huggingface.co/alnrg2arg/
- Original model: https://huggingface.co/alnrg2arg/blockchainlabs_7B_merged_test2_4_prune/
Original model description:
---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- pruning
- alnrg2arg/blockchainlabs_7B_merged_test2_4
- mlabonne/NeuralBeagle14-7B
- udkai/Turdus
---
# blockchainlabs_7B_merged_test2_4_prune
blockchainlabs_7B_merged_test2_4_prune is a pruned model based on alnrg2arg/blockchainlabs_7B_merged_test2_4, which is a merge of the
following models made with [mergekit](https://github.com/cg123/mergekit):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
Pruning Kit I used: [wanda](https://github.com/locuslab/wanda?tab=readme-ov-file#ablation-on-obs-weight-update)
## 🧩 Configuration
```json
{
"_name_or_path": "alnrg2arg/blockchainlabs_7B_merged_test2_4_prun",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.36.2",
"use_cache": false,
"vocab_size": 32000
}
```
|
katk31/q-Taxi-v3-test | katk31 | 2024-05-18T17:56:12Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T17:56:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` here is the pickle-loading helper defined in the Deep RL course notebook.
model = load_from_hub(repo_id="katk31/q-Taxi-v3-test", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
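The snippet above stops after creating the environment. As an illustration, the continuation below rolls out the greedy policy; the `"qtable"` key is an assumption based on the course notebook's pickle format, not something this card states.
```python
# Hedged continuation of the snippet above: greedy rollout with the loaded Q-table.
import numpy as np

qtable = model["qtable"]  # assumed key name from the Deep RL course pickle

state, _ = env.reset(seed=42)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action for this state
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```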
|
Beniuv/rl_course_vizdoom_health_gathering_supreme | Beniuv | 2024-05-18T17:54:46Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T17:54:35Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.14 +/- 5.50
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Beniuv/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
afrideva/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2-GGUF | afrideva | 2024-05-18T17:52:22Z | 10 | 2 | null | [
"gguf",
"nlp",
"code",
"phi-3",
"chat",
"function-call",
"ggml",
"quantized",
"text-generation",
"es",
"dataset:Bluckr/function-calling-assistant-spanish-pofi-v2",
"base_model:Bluckr/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2",
"base_model:quantized:Bluckr/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-18T17:34:04Z | ---
base_model: Bluckr/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2
datasets:
- Bluckr/function-calling-assistant-spanish-pofi-v2
inference: true
language:
- es
license: mit
model_creator: Bluckr
model_name: Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- nlp
- code
- phi-3
- chat
- function-call
- gguf
- ggml
- quantized
widget:
- messages:
- content: '### Input: Que sabes hacer? ### Response:'
role: user
---
# Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2-GGUF
Quantized GGUF model files for [Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2](https://huggingface.co/Bluckr/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2) from [Bluckr](https://huggingface.co/Bluckr)
## Original Model Card:
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64beeb8f4b4ff0d5097ddcfc/HF124f84-X7L_rPynRa4n.gif" alt="Pofi" width="300" style="display: block; margin: 0 auto;" />
</div>
Phi-3 adjusted to behave like the assistant Pofi; the training data works with the function-calling method.
It is a fine-tuned version of ["unsloth/Phi-3-mini-4k-instruct"](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct)
Pofi can:
| Utilities |
|-----------------------------|
| Setting alarms |
| Connecting to the web |
| Sending files |
| Sending messages |
| Saving strings of characters|
| Opening applications |
| Creating files |
| Manipulating the system |
## Simple Inference API
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Bluckr/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2"
headers = {"Authorization": "Bearer %s" % token_id}  # token_id: your Hugging Face API token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

prompt = """### Input: cómo te llamas? ### Response:"""
output = query({"inputs": prompt})
print(output)
```
# Response
```python
[{'generated_text': '### Input: cómo te llamas? ### Response: soy Pofi.'}]
```
## Unsloth Inference
```python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
```
```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
```
```python
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "Bluckr/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2",
max_seq_length = 2048,
dtype = None,
load_in_4bit = True,
)
FastLanguageModel.for_inference(model)
```
```python
inputs = tokenizer(
[
alpaca_prompt.format(
"""""functions":[{'name': 'fnt_programa', 'description': 'el usuario solicita un programa.', 'parameters': [{'description': 'nombre del programa solicitado.', 'name': 'programa', 'required': True, 'type': 'string'}]},
{'name': 'fnt_buscar_web', 'description': 'el usuario solicita una busqueda en internet.', 'parameters': [{'description': 'busqueda especifica.', 'name': 'busqueda', 'required': False, 'type': 'string'}, {'description': 'página especifica para la busqueda', 'name': 'sitio', 'required': False, 'type': 'string'}]},
{'name': 'fnt_buscar_lugares', 'description': 'el usuario solicita la ubicación de un lugar.', 'parameters': [{'description': 'lugar especifico.', 'name': 'lugar', 'required': True, 'type': 'string'}, {'description': 'ubicación del lugar', 'name': 'ubicación', 'required': False, 'type': 'string'}]},
{'name': 'fnt_enviar_mensajes', 'description': 'el usuario desea enviar un mensaje.', 'parameters': [{'description': 'el usuario especifica a quien enviar el mensaje.', 'name': 'destinatario', 'required': True, 'type': 'string'}, {'description': 'contenido que desea enviar el usuario', 'name': 'mensaje', 'required': True, 'type': 'string'}]},
{'name': 'fnt_crear_archivo', 'description': 'el usuario desea crear un archivo.', 'parameters': [{'description': 'el usuario especifica el nombre del archivo.', 'name': 'nombre', 'required': False, 'type': 'string'}, {'description': 'ubicación donde se creará el archivo', 'name': 'ubicación', 'required': False, 'type': 'string'}, {'description': 'extensión del archivo', 'name': 'extensión', 'required': False, 'type': 'string'}]},
{'name': 'fnt_establecer_alarma', 'description': 'el usuario desea una alarma o recordatorio', 'parameters': [{'description': 'el usuario especifica el nombre de la alarma.', 'name': 'nombre', 'required': False, 'type': 'string'}, {'description': 'hora de la alarma', 'name': 'hora', 'required': True, 'type': 'string'}, {'description': 'día que se activará la alarma', 'name': 'día', 'required': False, 'type': 'string'}]},
{'name': 'fnt_enviar_archivos', 'description': 'el usuario solicita el envio de archivos.', 'parameters': [{'description': 'archivos especificos.', 'name': 'archivos', 'required': True, 'type': 'string'}, {'description': 'destino donde llegarán los archivos', 'name': 'destino', 'required': True, 'type': 'string'}]},
{'name': 'fnt_guardar_valores', 'description': 'el usuario solicita almacenar valores.', 'parameters': [{'description': 'valor a almacenar.', 'name': 'valor', 'required': True, 'type': 'string'}, {'description': 'lugar de almacenamiento', 'name': 'lugar', 'required': False, 'type': 'string'}]},
{'name': 'fnt_hora', 'description': 'el usuario solicita la hora', 'parameters': [{'description': 'ubicación donde la hora es solicitada.', 'name': 'ubicacion', 'required': True, 'type': 'string'}]},
{'name': 'fnt_clima', 'description': 'el usuario solicita el clima', 'parameters': [{'description': 'ubicación donde se solicita el clima.', 'name': 'ubicacion', 'required': True, 'type': 'string'}]},
{'name': 'fnt_significado', 'description': 'el usuario solicita el significado de una palabra', 'parameters': [{'description': 'palabra solicitada.', 'name': 'palabra', 'required': True, 'type': 'string'}]},""", # instruction
"Pofi envia el archivo de selfie.jpg a drive", # input
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
```
# Response
```python
Response:\nEnviando el archivo de selfie.jpg a drive.{"function_call":{"name":"fnt_enviar_archivos","arguments":{"archivos":"selfie.jpg","destino":"drive"}}}<|endoftext|>']
``` |
shapiron/q-FrozenLake-v1-4x4-noSlippery | shapiron | 2024-05-18T17:47:24Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-16T03:11:39Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="shapiron/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
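A short evaluation sketch, assuming the pickled dictionary follows the Deep RL Course layout (an `env_id` entry plus the Q-table under a `qtable` key); adjust the key names if your download differs:
```python
import numpy as np
import gymnasium as gym

# `model` comes from load_from_hub above; `qtable` is the assumed key name
qtable = np.array(model["qtable"])
env = gym.make(model["env_id"], is_slippery=False)

state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode reward:", total_reward)
```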
|
VinyVan/model8 | VinyVan | 2024-05-18T17:44:39Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T17:40:16Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** VinyVan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cesaenv/rottenTomatoes | cesaenv | 2024-05-18T17:33:37Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-05-18T17:33:28Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤗! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
AliSaadatV/virus_pythia_70_1024_2d_representation_GaussianPlusCE | AliSaadatV | 2024-05-18T17:33:25Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:finetune:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T17:33:20Z | ---
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- generated_from_trainer
model-index:
- name: virus_pythia_70_1024_2d_representation_GaussianPlusCE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# virus_pythia_70_1024_2d_representation_GaussianPlusCE
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
PQlet/lora-narutoblip-v1-ablation-r16-a16-module_to_k_to_v | PQlet | 2024-05-18T17:32:00Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-18T17:31:55Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - PQlet/lora-narutoblip-v1-ablation-r16-a16-module_to_k_to_v
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the Naruto-BLIP dataset. You can find some example images in the following.







## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Minaaaa/distilbert_qa_v2 | Minaaaa | 2024-05-18T17:27:33Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-18T17:27:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
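In the meantime, given the question-answering pipeline tag, a minimal sketch (question and context are illustrative only) would be:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Minaaaa/distilbert_qa_v2")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```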
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
antitheft159/bidaochain.195 | antitheft159 | 2024-05-18T17:26:06Z | 0 | 0 | null | [
"license:cc-by-nd-4.0",
"region:us"
] | null | 2024-05-18T17:25:23Z | ---
license: cc-by-nd-4.0
---
|
mabakaik/clasificadorTexto | mabakaik | 2024-05-18T17:23:05Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-05-18T13:50:10Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤗! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Bachhoang/peft-vbd-alpha-32 | Bachhoang | 2024-05-18T17:17:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T07:37:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bachhoang/peft-vbd-alpha-32-checkpoint | Bachhoang | 2024-05-18T17:17:00Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:Bachhoang/vbd-llama2-7B-legals-chat",
"base_model:adapter:Bachhoang/vbd-llama2-7B-legals-chat",
"region:us"
] | null | 2024-05-14T07:37:28Z | ---
base_model: Bachhoang/vbd-llama2-7B-legals-chat
tags:
- generated_from_trainer
model-index:
- name: peft-vbd-alpha-32-checkpoint
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-vbd-alpha-32-checkpoint
This model is a fine-tuned version of [Bachhoang/vbd-llama2-7B-legals-chat](https://huggingface.co/Bachhoang/vbd-llama2-7B-legals-chat) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch is shown after this list):
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
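For reference, an equivalent `BitsAndBytesConfig` sketch (values copied from the list above; the exact object used during training is not included in the repo):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```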
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.5.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.2
|
emilykang/Phi_medmcqa_question_generation-medicine_lora | emilykang | 2024-05-18T17:16:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-05-18T15:13:10Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
datasets:
- generator
model-index:
- name: Phi_medmcqa_question_generation-medicine_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi_medmcqa_question_generation-medicine_lora
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset.
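The repository holds a LoRA adapter rather than full weights. A minimal loading sketch (assuming the adapter config resolves back to `microsoft/phi-2`; the prompt is purely illustrative):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the phi-2 base model and applies this LoRA adapter on top of it
model = AutoPeftModelForCausalLM.from_pretrained(
    "emilykang/Phi_medmcqa_question_generation-medicine_lora"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

prompt = "Generate a medical multiple-choice question about cardiology."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```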
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
theglassofwater/mistral_pretraining_1.6ksteps_36batch | theglassofwater | 2024-05-18T17:14:09Z | 184 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T17:14:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KBlueLeaf/llama3-llava-next-8b-gguf | KBlueLeaf | 2024-05-18T17:13:57Z | 408 | 8 | null | [
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-18T17:01:41Z | ---
language:
- en
---
# LLaMA3-LLaVA-NeXT-8B GGUF files
GGUF version of https://huggingface.co/lmms-lab/llama3-llava-next-8b <br>
Download the `mmproj-model-f16.gguf` file and any quant you want of `llama3-llava-next-8b-*.gguf`.
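As a rough illustration, the `llama-cpp-python` bindings can combine the projector and a quantized model like this (sketch only: the quant filename is a placeholder, and the LLaVA-1.5 chat handler is used as an approximation, so the LLaMA-3 chat template may need adjusting):
```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="llama3-llava-next-8b-Q4_K_M.gguf",  # hypothetical quant filename
    chat_handler=chat_handler,
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(out["choices"][0]["message"]["content"])
```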
Follow the [readme from llama.cpp](https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/README.md)<br>
or the [readme from llama-cpp-python](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#multi-modal-models) |
mohcinebd/outputs | mohcinebd | 2024-05-18T17:13:08Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-05-18T17:12:54Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: ybelkada/falcon-7b-sharded-bf16
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Mullerjo/dqn-SpaceInvadersNoFrameskip-v | Mullerjo | 2024-05-18T17:13:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T17:12:29Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 689.00 +/- 280.95
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Mullerjo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Mullerjo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Mullerjo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
c4n11/multilingual-xlm-roberta-for-ner | c4n11 | 2024-05-18T17:04:30Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-17T20:04:58Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: multilingual-xlm-roberta-for-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual-xlm-roberta-for-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
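As a quick sanity check, the checkpoint can be tried with the token-classification pipeline (a sketch; the label set depends on the training data, which is not documented here):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="c4n11/multilingual-xlm-roberta-for-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jeff Dean works at Google in California."))
```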
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 1.0 | 263 | 0.1558 | 1.0 |
| 0.2186 | 2.0 | 526 | 0.1366 | 1.0 |
| 0.2186 | 3.0 | 789 | 0.1372 | 1.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
taesiri/output10 | taesiri | 2024-05-18T16:56:53Z | 80 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"paligemma",
"image-text-to-text",
"generated_from_trainer",
"dataset:vq_av2",
"base_model:google/paligemma-3b-pt-224",
"base_model:finetune:google/paligemma-3b-pt-224",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-18T01:04:22Z | ---
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
datasets:
- vq_av2
model-index:
- name: output10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output10
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
muzammil-eds/stable-diffusion-v1.4-floorplans-generator-v1 | muzammil-eds | 2024-05-18T16:56:46Z | 29 | 1 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-18T16:55:47Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
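In the meantime, since the repository is a `StableDiffusionPipeline` checkpoint, a minimal text-to-image sketch (the prompt is only an illustration) would be:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "muzammil-eds/stable-diffusion-v1.4-floorplans-generator-v1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a floor plan of a two-bedroom apartment").images[0]  # hypothetical prompt
image.save("floorplan.png")
```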
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dukhanin/dig_break_hack_1705 | Dukhanin | 2024-05-18T16:55:10Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"t5",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-18T16:36:04Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Dukhanin/dig_break_hack_1705
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Dukhanin/dig_break_hack_1705')
embeddings = model.encode(sentences)
print(embeddings)
```
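For the sentence-similarity use case, the resulting embeddings can be compared directly, for example with cosine similarity:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Dukhanin/dig_break_hack_1705')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```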
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Dukhanin/dig_break_hack_1705)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 301 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
MrezaPRZ/codegemma_create_context | MrezaPRZ | 2024-05-18T16:54:49Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T16:48:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
emendes3/llava_13b_country_synthetic | emendes3 | 2024-05-18T16:49:07Z | 163 | 0 | peft | [
"peft",
"safetensors",
"llava_llama",
"generated_from_trainer",
"base_model:liuhaotian/llava-v1.5-13b",
"base_model:adapter:liuhaotian/llava-v1.5-13b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-13T23:33:42Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: liuhaotian/llava-v1.5-13b
model-index:
- name: llava_13b_country_synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llava_13b_country_synthetic
This model is a fine-tuned version of [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0060
- eval_runtime: 79.5724
- eval_samples_per_second: 12.203
- eval_steps_per_second: 0.39
- epoch: 19.0
- step: 589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20.0
### Framework versions
- PEFT 0.10.0
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Tokenizers 0.15.1 |
AI-Sweden-Models/tyr | AI-Sweden-Models | 2024-05-18T16:46:51Z | 30 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"sv",
"base_model:Equall/Saul-7B-Instruct-v1",
"base_model:merge:Equall/Saul-7B-Instruct-v1",
"base_model:timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
"base_model:merge:timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T20:19:16Z | ---
language:
- sv
license: apache-2.0
tags:
- merge
base_model:
- timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
- Equall/Saul-Instruct-v1
pipeline_tag: text-generation
---
# Tyr
<img src="tyr_jpeg.jpeg" width="100%" height="auto">
*Tyr was a war god and a son of Odin, but he was chiefly associated with law, justice, and the swearing of oaths.*
# How to use:
```python
import torch
from transformers import pipeline
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = pipeline("text-generation", "AI-Sweden-Models/tyr", device=device)
query = "Vad är grundlagen?"
response = pipe(f"USER:{query}ASSISTANT:", max_length=512, temperature=1.0)
print(response[0]["generated_text"].split("ASSISTANT: ")[1])
```
```python
>>> "Grundlagen Γ€r en samling av grundlΓ€ggande lagar som reglerar den politiska strukturen och funktionen av en nation.
Den innehΓ₯ller de grundlΓ€ggande principer och regler som styr en nations politiska system, inklusive rΓ€ttigheter och skyldigheter av medborgare,
struktur och funktion av regeringen, och relationer mellan olika politiska organ.
Grundlagen kan ocksΓ₯ inkludera de grundlΓ€ggande vΓ€rderingarna och principer som en nation stΓ₯r fΓΆr."
```
```python
query = "Är det olagligt att köra för fort?"
response = pipe(f"USER:{query}ASSISTANT:", max_length=512, temperature=1.0)
print(response[0]["generated_text"].split("ASSISTANT: ")[1])
```
```python
>>> "Ja, det Γ€r olagligt att kΓΆra fΓΆr fort.
Varje land har sina egna trafikregler och hastighetsbegrΓ€nsningar,
men det Γ€r allmΓ€nt olagligt att ΓΆverstiga de tillΓ₯tna hastighetsgrΓ€nserna.
Om du bryter hastighetsbegrΓ€nsningarna kan du bli dΓΆmd fΓΆr trafikbrott och riskera bΓΆter,
fΓΆrlust av kΓΆrkortet och i vissa fall Γ€ven fΓ€ngelse."
```
This model is a merge of [timpal0l/Mistral-7B-v0.1-flashback-v2-instruct](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2-instruct) and [Equall/Saul-Instruct-v1](https://huggingface.co/Equall/Saul-Instruct-v1).
## config.yaml
```yaml
models:
- model: timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
# No parameters necessary for base model
- model: Equall/Saul-Instruct-v1
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
parameters:
int8_mask: true
dtype: bfloat16
``` |
Fabiioki/distilbert-base-uncased-finetuned-ag-news | Fabiioki | 2024-05-18T16:43:55Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-18T16:03:50Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ag-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ag-news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.9339
- F1: 0.9337
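For reference, a minimal inference sketch (not part of the auto-generated card); the repository id is inferred from the model name and may need adjusting:
```python
from transformers import pipeline

# Assumed repository id, inferred from the model name above.
classifier = pipeline(
    "text-classification",
    model="Fabiioki/distilbert-base-uncased-finetuned-ag-news",
)

# Returns one dict per input text, each with a predicted label and a score.
print(classifier("Stocks rallied after the central bank held interest rates steady."))
```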
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
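As a hedged illustration, the list above maps onto `transformers.TrainingArguments` roughly as follows; the output directory is an assumption and the Adam betas/epsilon are the library defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ag-news",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas/epsilon are left at their defaults
    num_train_epochs=2,
)
```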
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3291 | 1.0 | 900 | 0.2403 | 0.9283 | 0.9281 |
| 0.1933 | 2.0 | 1800 | 0.2251 | 0.9339 | 0.9337 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bartowski/Hermes-2-Theta-Llama-3-8B-GGUF | bartowski | 2024-05-18T16:40:16Z | 551 | 14 | null | [
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"merges",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:quantized:NousResearch/Hermes-2-Pro-Llama-3-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-15T19:42:36Z | ---
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
results: []
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
messages:
- role: system
content: >-
You are a sentient, superintelligent artificial general intelligence, here
to teach and assist me.
- role: user
content: >-
Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Hermes-2-Theta-Llama-3-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2854">b2854</a> for quantization.
Original model: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B
All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
<|begin_of_text|><|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
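As a minimal sketch (not from the original card), the prompt format above can be used with the llama-cpp-python bindings; the file path assumes the Q4_K_M quant from the table below has been downloaded to the current directory:
```python
from llama_cpp import Llama

# Assumed local path to one of the quants listed below.
llm = Llama(model_path="./Hermes-2-Theta-Llama-3-8B-Q4_K_M.gguf", n_ctx=8192)

# Prompt assembled exactly as in the format above; depending on tokenizer
# settings, the <|begin_of_text|> token may also be inserted automatically.
prompt = (
    "<|begin_of_text|><|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Why is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```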
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Hermes-2-Theta-Llama-3-8B-Q8_0.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Hermes-2-Theta-Llama-3-8B-Q6_K.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Hermes-2-Theta-Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Hermes-2-Theta-Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Hermes-2-Theta-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Hermes-2-Theta-Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Hermes-2-Theta-Llama-3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Hermes-2-Theta-Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Hermes-2-Theta-Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Hermes-2-Theta-Llama-3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Hermes-2-Theta-Llama-3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Hermes-2-Theta-Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Hermes-2-Theta-Llama-3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Hermes-2-Theta-Llama-3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Hermes-2-Theta-Llama-3-8B-Q2_K.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Hermes-2-Theta-Llama-3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Hermes-2-Theta-Llama-3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Hermes-2-Theta-Llama-3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Hermes-2-Theta-Llama-3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Hermes-2-Theta-Llama-3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Hermes-2-Theta-Llama-3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Hermes-2-Theta-Llama-3-8B-GGUF --include "Hermes-2-Theta-Llama-3-8B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Hermes-2-Theta-Llama-3-8B-GGUF --include "Hermes-2-Theta-Llama-3-8B-Q8_0.gguf/*" --local-dir Hermes-2-Theta-Llama-3-8B-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Hermes-2-Theta-Llama-3-8B-Q8_0) or download them all in place (./)
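The same download can also be scripted from Python with `huggingface_hub` (a sketch, equivalent to the single-file CLI command above):
```python
from huggingface_hub import hf_hub_download

# Download one quant file into the current directory.
hf_hub_download(
    repo_id="bartowski/Hermes-2-Theta-Llama-3-8B-GGUF",
    filename="Hermes-2-Theta-Llama-3-8B-Q4_K_M.gguf",
    local_dir="./",
)
```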
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing in your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
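To make the rule of thumb above concrete, here is a hypothetical helper (not from the original card) that picks the largest quant fitting in a given amount of memory, using file sizes from the table above; the 1.5GB headroom is an assumption within the suggested 1-2GB range:
```python
# File sizes in GB, taken from the table above (a representative subset).
QUANT_SIZES_GB = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17, "IQ2_M": 2.94,
}

def pick_quant(available_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest quant whose file fits in available_gb minus headroom."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items()
               if size <= available_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else "IQ1_S"

print(pick_quant(8.0))  # a GPU with 8GB of VRAM -> Q5_K_M
```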
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
santoshsawant/code-llama-7b-text-to-sql | santoshsawant | 2024-05-18T16:31:25Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-18T16:01:09Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
datasets:
- generator
model-index:
- name: code-llama-7b-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
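Since this repository holds a PEFT adapter rather than full model weights, a minimal loading sketch (not part of the auto-generated card) might look like the following; the adapter repository id is inferred from the model name and the prompt is only illustrative:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the CodeLlama base model and attaches the adapter from this repo (assumed id).
model = AutoPeftModelForCausalLM.from_pretrained(
    "santoshsawant/code-llama-7b-text-to-sql",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = "-- Question: list the names of all customers located in Berlin\nSELECT"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```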
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.19.1
- Tokenizers 0.19.1 |
thiagoquilice/tweets_deforestation_all_withoutRTorduplicate_new | thiagoquilice | 2024-05-18T16:31:23Z | 5 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-05-18T16:31:20Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# tweets_deforestation_all_withoutRTorduplicate_new
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("thiagoquilice/tweets_deforestation_all_withoutRTorduplicate_new")
topic_model.get_topic_info()
```
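A further usage sketch (not part of the original card): searching the fitted topics for a term, assuming the saved model still bundles its embedding model:
```python
# Find the topics most similar to a search term and inspect the best match.
topic_ids, similarities = topic_model.find_topics("deforestation", top_n=3)
print(topic_model.get_topic(topic_ids[0]))
```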
## Topic overview
* Number of topics: 715
* Number of training documents: 361202
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | do - que - os - no - com | 50 | Deforestation in the Amazon |
| 0 | peti - assine - impedir - explora - via | 154206 | Stop Deforestation in the Amazon |
| 1 | soja - plantar - planta - macron - cerrado | 45854 | Environmental concerns over Brazilian soy production |
| 2 | isso - aqui - mat - quem - essa | 12141 | Political corruption and its consequences |
| 3 | - - - - | 8228 | "Technology trends and innovations" |
| 4 | petition - sign - the - impedir - explora | 6290 | Stop Deforestation in the Amazon |
| 5 | carne - comer - consumo - gado - veganos | 5150 | Impact of meat consumption on the Amazon rainforest |
| 6 | ajudem - ajudar - custa - vamos - vamo | 3575 | Save the Amazon! |
| 7 | ele - presidente - dele - cara - esse | 2595 | Presidential responsibility for Amazon deforestation |
| 8 | tamanho - leia - sp - quase - legal | 2412 | Deforestation in the Amazon |
| 9 | falar - vamos - carne - sobre - na | 1973 | Deforestation and meat consumption in the Amazon |
| 10 | noruega - alemanha - fundo - financiamento - projetos | 1960 | Funding for environmental projects in Brazil |
| 11 | brasil - brasileiro - brasileiros - presidente - mundo | 1832 | Brazilian President's Handling of Amazon Rainforest Conservation |
| 12 | assinem - assinar - assinado - assina - abaixo | 1714 | Stop Deforestation in the Amazon |
| 13 | alertas - batem - alerta - crescem - aumentam | 1638 | Deforestation alerts in Amazon break records |
| 14 | governo - governos - anteriores - atual - culpa | 1540 | Responsibility for Amazon deforestation |
| 15 | zero - serra - governador - poss - promete | 1398 | Deforestation in the Amazon |
| 16 | investidores - empresas - bancos - trilh - banco | 1351 | Financing of Deforestation in the Amazon |
| 17 | uhul - walmart - anunciou - comprar - carne | 1302 | Walmart's announcement on not purchasing meat from Amazon rainforest deforestation |
| 18 | monitorar - lite - sat - monitoramento - lan | 1282 | Monitoring deforestation in the Amazon using satellite technology |
| 19 | diminuiu - cai - diminui - caiu - ritmo | 1253 | Deforestation in the Amazon decreases |
| 20 | manifesta - protesto - contra - protestar - protestos | 1236 | Protestos contra desmatamento na AmazΓ΄nia |
| 21 | petici - firma - la - impedir - explora | 1156 | Save the Amazon Rainforest |
| 22 | ganha - realidade - prote - dt - florestas | 1070 | Deforestation alerts in Brazil |
| 23 | militares - armadas - militar - for - opera | 1056 | Military efforts to combat deforestation in the Amazon |
| 24 | fogo - queimadas - fuma - chamas - queimada | 1050 | Deforestation and fires in the Amazon |
| 25 | gerando - hora - hectares - era - bolsonaro | 862 | Deforestation under Bolsonaro administration |
| 26 | impedir - explora - via - da - nia | 860 | Preventing deforestation in the Amazon |
| 27 | best - partners - hesitate - raze - loot | 829 | Deforestation and land exploitation |
| 28 | acesse - sabia - hey - saiba - dt | 800 | Ending deforestation in the Amazon |
| 29 | destruction - billions - funds - fires - with | 792 | Deforestation in Brazil |
| 30 | moratorium - soy - moratoria - effective - amazon | 758 | Soy Moratorium in Brazil |
| 31 | desmatamentozero - voltou - crescer - dt - meses | 750 | Deforestation in the Amazon |
| 32 | deixe - continue - absurdo - voltou - crescer | 682 | Deforestation in the Amazon |
| 33 | simples - jeito - entenda - um - via | 676 | Understanding Deforestation in Simple Terms |
| 34 | chuva - chuvas - sudeste - falta - escassez | 648 | Impact of deforestation on rainfall in Brazil |
| 35 | diretor - galv - exonerado - demitido - ricardo | 619 | "Controversy surrounds ex-INPE director Ricardo GalvΓ£o after release of deforestation data" |
| 36 | perdeu - perde - km - metros - quil | 609 | Deforestation in the Amazon |
| 37 | pandemia - pandemias - xima - novas - covid | 604 | Pandemic-related deforestation in the Amazon |
| 38 | aquecimento - global - clima - clim - mudan | 584 | Impact of deforestation on global warming |
| 39 | petizione - firma - la - lewat - ednictv | 573 | Petition to prevent deforestation in the Amazon |
| 40 | fundo - dinheiro - projetos - doa - recursos | 565 | Funding for Amazon conservation projects |
| 41 | hypocrisy - talk - put - funding - did | 562 | Corporate hypocrisy in zero deforestation policies |
| 42 | mico - econ - micos - valor - macaco | 561 | Economic value of deforestation in the Amazon |
| 43 | menor - taxa - desde - segunda - registrada | 556 | Deforestation rates in Brazil |
| 44 | mapa - dados - interativo - novos - mapas | 551 | Real-time maps of deforestation in the Amazon |
| 45 | petition - sign - help - save - please | 546 | Amazon Rainforest Preservation Petition |
| 46 | trav - petici - firma - la - impedir | 526 | Preventing deforestation in the Amazon |
| 47 | tamanho - leia - sp - quase - legal | 516 | Deforestation in the Amazon |
| 48 | senado - vice - hamilton - mour - senador | 490 | Brazilian Senate to discuss deforestation and increase in fires in the Amazon |
| 49 | ela - marido - dela - mulher - falando | 489 | Woman's husband involved in Amazon deforestation |
| 50 | triste - chorar - chorando - me - eu | 484 | Deforestation in the Amazon |
| 51 | petici - firma - la - firm - impedir | 475 | Preventing deforestation in the Amazon |
| 52 | prestes - limite - atingir - irrevers - vel | 475 | Deforestation in the Amazon nearing irreversible limit |
| 53 | acabei - juntos - mp - corrup - dizer | 470 | Combating corruption and deforestation in the Amazon |
| 54 | papel - trouxa - tanto - eu - minha | 469 | Deforestation and paper production |
| 55 | recontar - covid - ativistape - floresta - nica | 464 | Discovery of lost civilizations in the Amazon rainforest during COVID-19 pandemic |
| 56 | cumpadi - mant - not - cias - amea | 450 | Deforestation in the Amazon |
| 57 | menor - ndice - registra - taxa - medido | 446 | Amazon deforestation rates |
| 58 | carbono - xido - emiss - atmosfera - absorver | 433 | Carbon emissions from deforestation in the Amazon |
| 59 | ministro - ambiente - meio - salles - ricardo | 422 | Controversial environmental policies in Brazil |
| 60 | bate - recorde - seguido - bateu - consecutivo | 422 | Deforestation in the Amazon sets new record |
| 61 | uhul - walmart - anunciou - comprar - carne | 408 | Walmart's announcement on not purchasing meat from Amazon rainforest deforestation |
| 62 | comemorar - dia - celebrar - comemora - comemorado | 387 | Celebrating the Amazon rainforest, but with a twist |
| 63 | tition - signez - la - impedir - explora | 384 | Preventing deforestation in the Amazon |
| 64 | acabou - agora - boa - existe - sabemos | 383 | Deforestation in the Amazon |
| 65 | genas - ind - demarca - terras - gena | 376 | Deforestation in Indigenous Territories |
| 66 | janeiro - aumenta - mais - em - na | 369 | Deforestation in the Amazon in January |
| 67 | acabar - parar - tarefa - precisamos - luta | 368 | Stop Deforestation in the Amazon |
| 68 | reduzir - incra - combate - ajudado - recente | 368 | Combating deforestation in the Amazon |
| 69 | entrevista - professor - falar - sobre - aula | 366 | Debate on deforestation in the Amazon |
| 70 | km - mil - quil - metros - quadrados | 360 | Deforestation in the Amazon reaches nearly 1 million square kilometers |
| 71 | brasil - portugu - brasileiro - brasileiros - acabando | 357 | Controversies surrounding Brazilian culture and identity |
| 72 | ambiente - meio - ambiental - vigaristas - ambientalismo | 353 | Environmentalism and activism |
| 73 | previa - apoiou - milhares - protegidas - cies | 352 | Macron's support for Amazon deforestation project |
| 74 | pior - ndice - worst - rie - abril | 352 | Deforestation in the Amazon |
| 75 | mercosul - acordo - ue - europeia - uni | 352 | Environmental concerns in Mercosur-EU trade agreement |
| 76 | multas - indeniza - milh - aplicou - somam | 341 | Environmental fines in Brazil |
| 77 | corta - verba - dilma - contra - medi | 341 | Dilma Rousseff's intervention in the Amazon rainforest |
| 78 | abril - ltimos - maior - foi - anos | 339 | Deforestation in the Amazon in April was the largest in recent years |
| 79 | agosto - cai - caiu - entre - maio | 334 | Deforestation in the Amazon in August |
| 80 | firm - petici - la - impedir - explora | 334 | Preventing deforestation in the Amazon |
| 81 | marina - ministra - silva - ela - marido | 325 | Marina Silva's tenure as Minister of the Environment |
| 82 | estuda - sico - desmonte - levar - irrevers | 323 | Deforestation under Bolsonaro's government |
| 83 | aumentou - aumento - incontest - voltadas - inverte | 320 | Deforestation in the Amazon |
| 84 | setembro - cai - segundo - comp - dados | 320 | Deforestation in September |
| 85 | junho - aumenta - aumentou - cresce - cresceu | 318 | Deforestation in Brazil |
| 86 | dilma - corta - verba - cortou - rousseff | 318 | Dilma Rousseff's presidency and environmental policies |
| 87 | cresce - cresceu - final - uol - luizdomingosdeluna | 309 | Deforestation in the Amazon |
| 88 | levantamento - cerrado - desarticula - emdefesadocerrado - piau | 304 | Deforestation in the Cerrado region |
| 89 | google - ferramenta - ajudar - combater - mostrado | 301 | Google tools for forest monitoring and management |
| 90 | aumentou - ano - passado - aumento - anual | 299 | Deforestation increases annually |
| 91 | peti - assine - impedir - explora - da | 298 | Preventing Deforestation in the Amazon |
| 92 | outubro - aumenta - aumentou - aponta - cresce | 294 | Deforestation in Amazon increases in October |
| 93 | anual - taxa - cai - cerrado - metade | 291 | Deforestation rates in the Cerrado region |
| 94 | mar - abril - atinge - entre - km | 286 | Deforestation in the Amazon |
| 95 | atingido - futebol - campos - atinge - estado | 286 | Deforestation in the Amazon reaches an area of 100 football fields |
| 96 | graba - saliendo - quemada - camiones - una | 283 | Deforestation in the Amazon |
| 97 | desmatou - mt - grosso - mato - estado | 283 | Deforestation in Mato Grosso, Brazil |
| 98 | reduzem - combatem - legalizar - lei - degrada | 283 | Government efforts to reduce deforestation and degredation in the Amazon region |
| 99 | sexta - acre - divulgados - prodes - ltima | 281 | Deforestation rates in Brazil |
| 100 | coronav - rus - pandemia - corona - ximo | 279 | Impact of COVID-19 on Amazonian deforestation |
| 101 | papa - francisco - igreja - vaticano - nodo | 278 | Pope Francis and Amazon rainforest conservation |
| 102 | corriam - saudades - puro - estrelas - mpido | 275 | Loss of natural beauty and environmental degradation under Bolsonaro's presidency |
| 103 | fires - fire - forest - burning - amazon | 272 | Amazon Forest Fires and Deforestation |
| 104 | estradas - rodovia - rodovias - estrada - asfaltamento | 270 | Impact of roads on deforestation in the Amazon |
| 105 | imagens - mostram - fotos - lite - trecho | 268 | Deforestation in the Amazon |
| 106 | abril - maior - favorecer - ltimos - pl | 266 | Deforestation in Amazon in April |
| 107 | maio - aumenta - aumentou - compara - cresce | 263 | Deforestation in May |
| 108 | profund - imperdo - devastador - intenso - sofrendo | 263 | Deforestation and its severe consequences |
| 109 | simples - jeito - entenda - youtube - video | 260 | Understanding Deforestation in Simple Terms |
| 110 | year - last - increased - amazon - in | 259 | Amazon Deforestation Rates |
| 111 | opera - todas - suspende - interrompidas - pantanal | 257 | Environmental policies and regulations in Brazil |
| 112 | julho - rela - cresce - mesmo - ao | 257 | Deforestation in the Amazon in July |
| 113 | vacina - vacinas - cloroquina - covid - compra | 252 | Vaccine controversy in Brazil |
| 114 | sobem - seis - alertas - meses - vgncld | 251 | Deforestation alerts in Amazonia |
| 115 | gay - homofobia - lgbt - gays - racismo | 245 | LGBTQ+ rights and homophobia in Brazil |
| 116 | assinar - assinado - assinem - assinaturas - abaixo | 245 | Protecting the Amazon rainforest through signatures |
| 117 | partners - best - raze - loot - hesitate | 244 | Deforestation and land exploitation |
| 118 | fake - news - fakenews - mentiras - mentira | 243 | Fake News in the Amazon |
| 119 | blog - post - arquivo - blogosfera - cidadania | 239 | Blogosphere and citizen journalism in Brazil |
| 120 | vetar - ligados - importa - produtos - fran | 239 | Deforestation of the Amazon and related products |
| 121 | europa - europeus - totalidade - devastou - preocupada | 238 | European colonialism and environmental destruction |
| 122 | principal - pecu - fator - causa - causas | 236 | Causes of Deforestation in the Amazon |
| 123 | detecta - estima - registra - km - inpe | 235 | Deforestation in the Amazon |
| 124 | fund - destruction - billions - fires - with | 233 | Deforestation in Brazil |
| 125 | marca - chega - hist - ria - maior | 230 | Deforestation in the Amazon reaches record high |
| 126 | colours - follow - projeta - rostos - artista | 229 | Artist's project featuring indigenous faces on tree trunks in the Amazon rainforest |
| 127 | aumenta - sinalizam - refere - ano - dado | 227 | Deforestation in the Amazon increases in one year |
| 128 | aumentou - aumenta - brasileira - brasil - deforestaci | 227 | Deforestation in Brazilian Amazon |
| 129 | varia - km - representando - pio - destaque | 227 | Deforestation in Brazilian Amazon |
| 130 | ranking - lidera - par - estado - grosso | 225 | Deforestation rankings in the Amazon |
| 131 | americana - universidade - dobro - registrado - seria | 222 | Deforestation in the Amazon doubled, according to university study |
| 132 | prayforamazonia - peti - assine - impedir - explora | 220 | Protecting the Amazon Rainforest |
| 133 | acumula - mil - indicar - km - estudo | 219 | Deforestation rates in Amazonia |
| 134 | boicote - boicotar - produtos - brit - varejistas | 210 | Boycott of Brazilian products due to deforestation |
| 135 | pf - opera - deflagra - ilegal - mandados | 210 | Combating Illegal Deforestation in the Amazon |
| 136 | puta - merda - pariu - vc - caralho | 207 | Deforestation in the Amazon |
| 137 | aumenta - na - taubate - divulgapiaui - transmaz | 206 | Deforestation in the Amazon |
| 138 | firmam - acordo - firmar - incra - apresentam | 206 | MPF and INCRAP sign agreement to reduce deforestation in Amazonian settlements |
| 139 | iluminattis - congelada - geleiras - farsa - observar | 205 | Climate Change Denial |
| 140 | julho - passado - compara - rela - cresce | 204 | Growth in July compared to previous years |
| 141 | biden - joe - san - micas - eleito | 202 | Joe Biden and Amazon deforestation |
| 142 | peixes - riachos - encolhe - esquenta - pesca | 201 | Impact of deforestation on fish size in the Amazon |
| 143 | partner - bnpparibas - number - burning - world | 198 | Deforestation partnership with BNPP |
| 144 | sights - sets - danger - palm - oil | 190 | Threats to the Amazon: Brazil's Palm Oil Expansion |
| 145 | desmascara - petista - ticos - imprensa - deo | 189 | Desmantling of Amazonian deforestation exposed by former Petista government minister |
| 146 | leonardo - dicaprio - denuncia - avan - nchen | 189 | Leonardo DiCaprio speaks out against Amazon deforestation |
| 147 | please - sign - petition - this - help | 189 | Save the Amazon Rainforest |
| 148 | liga - seca - novo - estudo - pa | 189 | Deforestation in the Amazon linked to drought |
| 149 | af - comprova - argentina - afeta - perda | 185 | Deforestation in the Amazon affects rainfall in Argentina |
| 150 | ong - nova - integrantes - alta - autua | 185 | Deforestation in the Amazon |
| 151 | horizonte - belo - duas - equivalente - perdeu | 183 | Deforestation in June in Amazonia |
| 152 | gases - estufa - emiss - efeito - emissor | 181 | Deforestation and greenhouse gas emissions in Brazil |
| 153 | argentina - comprova - afeta - chuvas - estudo | 181 | Deforestation in Argentina affects rainfall |
| 154 | amazonia - nfts - aroma - cryptoart - sosamazonia | 178 | Deforestation and CryptoArt in Amazonia |
| 155 | televan - registrar - tvonline - volta - siga | 177 | Legal updates on TV online registration in Brazil |
| 156 | bragging - scandal - disappearance - while - massive | 175 | Environmental scandals: BNP and deforestation |
| 157 | faster - burns - another - destroyed - part | 175 | Amazon rainforest destruction |
| 158 | amazoniasos - prayforamazonas - prayforamazonia - prayfortheamazon - amazonrainforest | 174 | Protecting the Amazon Rainforest |
| 159 | registrada - taxa - menor - renova - baixo | 174 | Deforestation rates in the Amazon |
| 160 | divulga - desmate - alta - agora - governo | 173 | Government announces high desertion rate in Amazon |
| 161 | scandal - bragging - disappearance - while - massive | 173 | BNP's Deforestation Scandal |
| 162 | polui - rica - sul - am - ses | 171 | Deforestation in the Amazon region increases pollution in southern Brazil |
| 163 | corrup - irracional - economicamente - estudo - esperado | 170 | Irresponsible deforestation |
| 164 | anuncia - tev - menor - desde - indicam | 169 | Brazilian government announces reduced deforestation in the Amazon |
| 165 | boletim - setembro - alta - aumenta - imazon | 169 | Deforestation in the Amazon |
| 166 | desafiam - madeireiros - ilegais - combate - bbcembora | 168 | Illegal logging in the Amazon |
| 167 | pedir - desculpa - trouxa - meus - quero | 168 | Responsibility for Amazon deforestation |
| 168 | cidade - desmatada - segundo - imazon - cresce | 167 | Deforestation in the Amazon grows in a year |
| 169 | desmentem - autores - citado - temer - onu | 165 | Desmentem sobre queda na AmazΓ΄nia |
| 170 | apontam - aumentou - aumento - revelam - novos | 164 | Deforestation in the Amazon |
| 171 | extended - extends - moratorium - industry - soy | 164 | Brazilian soy industry's deforestation moratorium |
| 172 | chicken - fed - linked - soya - fast | 163 | "UK Supermarket Chicken Linked to Deforestation in Brazil" |
| 173 | dispara - setembro - agosto - em - na | 162 | Deforestation in the Amazon increases in August and September |
| 174 | futebol - campos - equivalente - minuto - mil | 161 | Deforestation in the Amazon equivalent to over 100 football fields in May |
| 175 | ouro - minera - mining - causado - gold | 161 | Illegal gold mining causes deforestation and environmental damage in the Amazon |
| 176 | tecnologia - tecnologias - ajudam - vigil - combatem | 161 | Technologies for forest monitoring and conservation in the Amazon |
| 177 | divulga - inpe - estima - mundogeo - emiss | 161 | Deforestation in the Amazon |
| 178 | junho - cai - ritmo - caiu - imazon | 159 | Deforestation in June |
| 179 | ciclo - entrar - seca - mortal - pode | 159 | Ciclo de desmatamento na savana |
| 180 | desemprego - desempregados - infla - milh - gasolina | 158 | Unemployment and environmental issues in Brazil |
| 181 | endossar - depender - fran - presidente - diz | 158 | Brazilian President's Stance on Soy Dependence and Deforestation |
| 182 | co - emiss - este - cai - ano | 158 | Deforestation in the Amazon |
| 183 | fhc - argumentos - mil - amazoniasemongs - vejam | 157 | Desmatamento da AmazΓ΄nia e responsabilidade governamental |
| 184 | regra - confirma - reserva - redd - minc | 156 | Brazilian government's new forest reserve rule may lead to increased deforestation in the Amazon |
| 185 | privada - empresa - contratar - monitorar - edital | 152 | Government to contract private company for Amazon deforestation monitoring |
| 186 | indireto - impulsionam - tricas - hidrel - bbcfloresta | 151 | Indirect Impacts of Hydroelectric Power Plants on the Amazon Rainforest |
| 187 | janeiro - queda - tem - cai - diz | 151 | Deforestation in Brazil in January, according to government data |
| 188 | antibi - resistentes - bact - eros - frigor | 150 | Antibiotic resistance in the Amazon rainforest |
| 189 | climatechange - amazonrainforest - brazil - environment - bancadaruralista | 149 | Deforestation in the Amazon and its impact on climate change |
| 190 | escravo - trabalho - mpt - chefe - usado | 149 | Labor abuses and exploitation in Brazilian Amazon |
| 191 | atrasada - reformar - lbum - fotos - veja | 148 | Reforming an outdated cattle farm in Amazonia |
| 192 | impunidade - crime - rights - humanidade - watch | 148 | Human Rights Violations in Amazonian Deforestation |
| 193 | desafios - brasil - efetiva - caminho - zerar | 148 | Challenges in Brazilian Amazon for effective policy against deforestation |
| 194 | amazoniasinextraccion - indigenouspeoplematter - stopamazonextraction - indigenouslivesmatter - justicefortheamazon | 147 | Protecting Indigenous Rights and the Amazon Rainforest |
| 195 | del - comerciante - beneficiarse - degradaci - masiva | 147 | Environmental damage and human rights abuses in Amazonian soy and meat production |
| 196 | mar - abril - cai - aletas - entre | 147 | Deforestation in the Amazon during March to April |
| 197 | meses - cai - rbr - poupados - em | 147 | Deforestation in the Amazon region |
| 198 | segundo - cresce - imazon - ano - um | 147 | Deforestation in the Amazon grows in a year according to IMazon |
| 199 | jetzt - unterschreiben - impedir - explora - this | 146 | Stop deforestation in the Amazon |
| 200 | evita - mortes - seguro - puc - reduzem | 146 | Deforestation in the Amazon and its impact on mortality rates |
| 201 | atingiu - outubro - quil - metros - quadrados | 145 | Deforestation in the Amazon reaches 10,000 square kilometers in October |
| 202 | impeachmentsalvavidas - panelacoforabolsonaro - flavio - abin - loteamento | 145 | Political corruption and impeachment in Brazil |
| 203 | indicam - setembro - alertas - cresce - inpe | 145 | Deforestation in the Amazon increases in September, according to INPE alerts |
| 204 | vestiram - militantes - frica - protesto - greenpeace | 144 | Protest against deforestation in the Amazon |
| 205 | escudos - xingu - dispara - ltimos - dos | 144 | Deforestation in Xingu, Brazil |
| 206 | tamanho - rtrt - sp - leia - quase | 144 | Deforestation in the Amazon |
| 207 | relaciona - pesquisador - seca - causas - especialistas | 143 | Causes of drought in the Amazon rainforest |
| 208 | palm - oil - danger - sights - sets | 142 | Amazon rainforest under threat from Brazil's palm oil ambition |
| 209 | contesta - den - incra - respons - marina | 142 | Responsibility for deforestation in the Amazon |
| 210 | meta - reduzir - metas - emiss - brasil | 141 | Brazil's efforts to reduce greenhouse gas emissions |
| 211 | fontes - fonte - parab - ns - vaivendo | 141 | Sources and references |
| 212 | balan - confirmam - oficiais - generative - sistema | 141 | Blockchain-based art and generative systems |
| 213 | suspens - anuncia - opera - todas - pantanal | 140 | Environmental policies in Brazil |
| 214 | denunciada - mineradora - dona - contamina - criticar | 140 | Mineradora noruega denunciada por contaminaciΓ³n |
| 215 | renovada - morat - ria - prorrogada - maio | 140 | Soy moratorium extended for another year |
| 216 | needed - preserve - still - shows - study | 139 | Brazil's Soy Moratorium: Preserving the Amazon |
| 217 | sinais - voltar - crescer - reutersong - preliminares | 139 | Deforestation in the Amazon: Signs of growth and recovery |
| 218 | registram - inferior - afirma - pantanal - queda | 137 | Deforestation in Brazil's Cerrado and Pantanal regions |
| 219 | dicaprio - leonardo - desafio - adere - denuncia | 137 | Leonardo DiCaprio speaks out against deforestation in the Amazon |
| 220 | schwarzman - steve - donald - apoiador - impulsiona | 137 | Political influence of Steve Schwarzman on environmental policies |
| 221 | maior - seguido - alto - desde - anos | 137 | Deforestation in the Amazon at record high |
| 222 | baleias - baleia - ca - noruega - matan | 137 | Hypocrisy in environmental policies: Norway's whaling and deforestation practices |
| 223 | federal - cia - combate - busca - deflagrou | 137 | Federal police combat illegal deforestation in the Amazon |
| 224 | setembro - respon - alcan - der - km | 137 | Deforestation in the Amazon in September |
| 225 | triplicar - dizem - cientistas - pode - bolsonaro | 136 | Deforestation in the Amazon under Bolsonaro's governance |
| 226 | tribunal - penal - humanidade - haia - apresentada | 136 | Environmental crimes and human rights violations in Brazil |
| 227 | explode - entre - ingl - ranking - isa | 135 | Deforestation in the Amazon |
| 228 | sad - boletim - imazon - referente - dezembro | 135 | Referential documents for sadness |
| 229 | refer - retorno - pesquisador - ponto - chega | 135 | Climate change impacts on Brazilian forestry |
| 230 | stonehenge - misterioso - revela - disp - pedras | 135 | Mysterious Stonehenge-like structure discovered in Amazon rainforest |
| 231 | advancing - track - researchers - frontier - agricultural | 134 | Impact of Brazil's Soy Moratorium on Advancing Agricultural Frontiers |
| 232 | motivos - preservar - florestas - empregos - piloto | 134 | Preservation of Brazilian Forests |
| 233 | conclusivos - cai - tend - queda - dmjeferson | 134 | Deforestation in the Amazon |
| 234 | detectou - sad - quadrados - quil - metros | 133 | Deforestation in the Amazon detected by satellite |
| 235 | coronavirus - diseases - infectious - next - commentary | 133 | Amazon Deforestation and Coronavirus Risk |
| 236 | mar - sobe - ecoc - ong - bate | 133 | Deforestation in the Amazon |
| 237 | odo - junho - per - mesmo - maior | 132 | Deforestation in the Amazon in June |
| 238 | expedi - viagem - realiza - protestar - haszn | 132 | Protesting deforestation in the Amazon |
| 239 | afetar - fim - fiscaliza - fundo - ibama | 131 | Fiscalization of Ibama against deforestation |
| 240 | moon - ki - ban - mundial - quest | 131 | Global forest degradation |
| 241 | futebol - campos - perdeu - sob - dizem | 131 | Loss of football fields due to Bolsonaro's policies |
| 242 | ganha - realidade - prote - dt - florestas | 131 | Deforestation alerts in Brazil |
| 243 | proporcionalmente - duas - vezes - maior - pesquisador | 131 | Deforestation in the Cerrado |
| 244 | forestation - du - financements - politiques - dites | 130 | Political hypocrisy in forestation policies |
| 245 | prayforamazonia - prayforbrazil - prayforrondonia - saveamazonia - prayforamazonas | 130 | Protecting the Amazon Rainforest |
| 246 | lepera - cleo - luciano - anulastf - rede | 130 | Deforestation in Brazil under Bolsonaro administration |
| 247 | lib - toneladas - co - emiss - milh | 127 | Deforestation in the Amazon |
| 248 | organizado - crime - raquel - semin - corrup | 127 | Organized Crime in Brazil |
| 249 | emergenciaclim - sostenibilidad - aloja - siglo - palmadeaceite | 127 | Amazon Rainforest Sustainability |
| 250 | possui - fun - metodologia - causas - especialistas | 127 | Causes of drought in the Amazon rainforest |
| 251 | foradilma - garraseguros - anta - viva - deixar | 126 | Insurance and financial services |
| 252 | vale - plantada - agroflorestas - superam - pesquisador | 126 | Agroforestry in Brazil |
| 253 | account - unavailable - temporarily - violates - learn | 125 | Twitter media policy violations |
| 254 | monitoramento - ministra - nova - acende - mostra | 124 | Minister's statement on Amazon deforestation reduction |
| 255 | cresce - ano - um - ebc - em | 124 | Deforestation in the Amazon grows in a year |
| 256 | toneladas - recupera - reflorestamento - carbono - castanheiras | 124 | Carbon emissions from Amazonian deforestation |
| 257 | petition - sign - cez - prin - the | 123 | Protect the Amazon Rainforest |
| 258 | sacas - colheita - rr - fecha - bbb | 123 | Agricultural production in Brazil |
| 259 | nima - rcio - criminoso - amea - sociedade | 123 | Deforestation in the Amazon region |
| 260 | volta - crescer - voltou - desapareceram - estica | 122 | Deforestation in the Amazon |
| 261 | incra - coloni - assentamentos - promete - diminuir | 121 | INCRAPrometeDiminuirEmDesmatamento |
| 262 | prayforamazonia - prayforamazonas - prayforamazon - peti - assine | 121 | Protecting the Amazon Rainforest |
| 263 | girafas - girafa - sobra - comem - elefantes | 121 | Elephants and soy in the Amazon |
| 264 | esperar - devem - ministra - desmatada - queda | 121 | Environmental policy and regulation |
| 265 | vigiar - radar - vai - web - barreiras | 120 | Monitoring Radar Systems in the Amazon Rainforest |
| 266 | derrubado - deste - registra - foram - conflict | 120 | Deforestation in Brazilian Amazon |
| 267 | outubro - caiu - ministra - compara - passado | 119 | Deforestation in the Amazon in October |
| 268 | cresce - desde - maior - bate - garimpos | 119 | Deforestation in the Amazon region increases and sets new records |
| 269 | crescem - alertas - legal - vermelho - sidnei | 119 | Deforestation alerts in Brazil |
| 270 | eleitoral - explode - durante - odo - per | 118 | Electoral issue: Explosive deforestation in the Amazon during the election period |
| 271 | dobra - dobrou - quintuplicou - quase - janeiro | 118 | Deforestation in the Amazon almost doubled in a year |
| 272 | meses - cai - comemore - perdas - conseguiu | 118 | Deforestation in the Amazon in recent months |
| 273 | bragging - scandal - disappearance - while - massive | 118 | BNP's Deforestation Scandal |
| 274 | aposta - tribo - google - combater - aplicativo | 118 | "Tribal App Uses Google Technology to Combat Deforestation in the Amazon" |
| 275 | liber - concordou - noruega - pagar - mi | 118 | Norway agrees to pay Brazil more than $100 million for Amazon deforestation reduction |
| 276 | entrega - monitorar - lite - resposta - sat | 118 | Brazilian satellite monitoring of deforestation |
| 277 | sacrificar - eleva - demanda - press - crescimento | 118 | Demand for meat and soy affects Amazonian growth |
| 278 | desflorestamento - cnndomingo - estranhamento - ciberia - assentamentos | 118 | Deforestation in the Amazon |
| 279 | carbon - source - change - climate - linked | 117 | Amazon rainforest as a carbon source |
| 280 | caiu - odo - per - legal - cai | 117 | Deforestation in the Amazon legal in one year |
| 281 | comparativo - extens - imagem - revela - nasa | 117 | Deforestation in the Amazon compared through historical images |
| 282 | protesto - vargas - hrs - ae - avenida | 116 | Protests in Rio de Janeiro against deforestation in the Amazon |
| 283 | perdem - rj - estados - traves - val | 116 | Deforestation in Rio de Janeiro |
| 284 | novembro - mensal - aumenta - rela - mesmo | 116 | Deforestation in Amazon increases in November |
| 285 | drones - drone - estudante - provar - desenvolvido | 115 | Use of drones in forest monitoring and detection of deforestation in the Amazon |
| 286 | desmatam - registrada - levantamento - registra - menor | 115 | Deforestation rates in the Amazon |
| 287 | pecu - causa - lt - strong - gt | 115 | Deforestation in the Amazon caused by cattle ranching |
| 288 | aumentou - passado - ano - momentooeco - cresceu | 115 | Deforestation in the Amazon increased last year |
| 289 | bact - diversidade - emagrecerem - riachos - peixes | 115 | Impact of deforestation on aquatic biodiversity |
| 290 | abastecimento - afetar - diminui - planeta - gua | 114 | Water resource management and climate change |
| 291 | diogo - pontos - ntara - alc - tuffani | 113 | Deforestation in Brazil |
| 292 | firm - trav - petici - la - impedir | 113 | Preventing deforestation in the Amazon |
| 293 | multado - flagrante - ms - pma - produtor | 112 | "Produtor multado em R$ mil por desmatamento de Cerrado em MS flagrante" |
| 294 | roraima - rionorte - hemisf - coladobrasil - amelhorfronteiraagr | 112 | Agricultural development in Roraima |
| 295 | winning - war - saving - bbc - news | 112 | Deforestation in Amazonia: BBC News Coverage |
| 296 | evita - redu - mortes - aeronaves - moderniza | 111 | Airplane Modernization Program |
| 297 | barram - supermercados - brasileiros - carne - suspendem | 111 | Brazilian supermarkets suspend beef purchases due to deforestation |
| 298 | recuperar - div - anual - quer - reas | 111 | Recovery of Amazonian lands |
| 299 | ppcdam - preven - plano - controle - frederico | 111 | PPCDAM - Plan for preventing and controlling deforestation in the Amazon |
| 300 | janeiro - aumenta - imazon - cresce - derrubados | 110 | Deforestation in the Amazon in January increases |
| 301 | radar - orbital - melhorar - novo - fiscaliza | 110 | New Orbital Radar to Monitor Deforestation in Amazon |
| 302 | julho - agosto - cai - entre - legal | 110 | Deforestation in the Amazon: Legal Causes and Impacts |
| 303 | limite - atingir - prestes - irrevers - determinado | 109 | Deforestation in the Amazon nearing irreversible limit |
| 304 | zoios - capela - abestado - abre - reduziram | 109 | Deforestation of the Amazon and its impact on local wildlife |
| 305 | gabinete - crise - anuncia - ministra - disparada | 109 | Brazilian government's response to deforestation in the Amazon |
| 306 | scandal - disappearance - bragging - while - massive | 109 | Environmental Scandals - BNP and Deforestation |
| 307 | quina - move - retorna - esteve - oab | 109 | Illegal logging in the Amazon |
| 308 | imprensa - jornal - jornalismo - validadas - jornais | 108 | Media coverage of deforestation in the Amazon |
| 309 | dezembro - rela - cresce - mesmo - cresceu | 108 | Deforestation in the Amazon in December |
| 310 | atinge - taxa - menor - cai - anos | 107 | Deforestation in the Amazon reduces tax rate in recent years |
| 311 | atingem - bate - semestre - sinais - junho | 107 | Deforestation alerts in June reach record high |
| 312 | dispara - setembro - agosto - segurou - folha | 107 | Deforestation in the Amazon |
| 313 | zerar - conseguiu - desmata - diminuir - quer | 107 | Government efforts to reduce illegal deforestation in the Amazon |
| 314 | carv - coibir - pacto - objetivo - empresas | 106 | Illegal Deforestation in the Cerrado |
| 315 | parlamento - holanda - mercosul - holand - rejeita | 105 | Rejection of Mercosur agreement by Dutch parliament due to Brazilian government's Amazon deforestation |
| 316 | fevereiro - fo - jornaloglobo - deste - cresce | 105 | Deforestation in Brazil's Amazon region |
| 317 | aves - extin - amea - cies - esp | 105 | Deforestation threatens Amazonian bird species |
| 318 | seletivo - detecta - crescimento - minist - meio | 105 | Deforestation in the Amazon |
| 319 | rie - rica - menor - agosto - hist | 105 | Deforestation in the Amazon since historical records |
| 320 | intensifica - fiscaliza - ibama - combater - smasher | 105 | IBAMa's efforts to combat illegal deforestation in the Amazon |
| 321 | brasileira - evolu - estiagem - brasil - biodiversidade | 105 | Deforestation in Brazilian Amazon |
| 322 | demanda - press - externa - carnes - eleva | 105 | Demand for meat and soy products increases pressure on the Amazon rainforest |
| 323 | mostram - sobe - cerim - pal - planalto | 105 | Deforestation in the Amazon region based on INPE data |
| 324 | abril - aponta - cai - entre - junho | 104 | Deforestation in Brazil's Amazon region during April to July, as reported by INPE |
| 325 | ten - leva - apresenta - outubro - taxa | 104 | Embargo on Amazon deforestation |
| 326 | verba - monitorar - falta - lack - prometido | 104 | Lack of monitoring resources for deforestation in the Cerrado region |
| 327 | digo - florestal - proposta - discutem - mudan | 104 | Proposed forestry reform in Brazil |
| 328 | atingiu - desmatada - julho - agosto - cai | 104 | Deforestation in the Amazon: Legal Cai Diz Inpe Rea Desmatada |
| 329 | perde - hora - hectares - avan - florestas | 104 | Deforestation in the Amazon |
| 330 | tweet - retweeted - twitter - detremura - tweets | 104 | Deforestation in the Brazilian Cerrado |
| 331 | afirma - avan - imazon - perseu - abramo | 103 | Brazilian Amazon deforestation |
| 332 | chuvas - afeta - distantes - continente - regime | 103 | Deforestation in the Amazon affects rainfall across the continent |
| 333 | este - maior - aponta - desde - enfraquece | 103 | Deforestation in the Amazon this year is the largest since the apocalypse |
| 334 | chuvas - relacionada - seca - drought - escassez | 103 | Impact of Drought on Brazilian Rainforest |
| 335 | amazonorbolsonaro - amazoniaoubolsonaro - amazonia - amazon - heard | 103 | Bolsonaro's Amazon policies |
| 336 | armyhelptheplanet - peti - assine - armyhelptheplane - impedir | 103 | "Army's Efforts to Save the Planet" |
| 337 | tvonline - siga - tv - online - reutersmaranh | 102 | Deforestation in the Amazon |
| 338 | monitoramento - plantio - resultados - morat - bioma | 102 | Monitoring soybean farms in the Amazon region |
| 339 | chantageia - destr - rainha - pa - selva | 102 | Leaked Information on Soja Destruction in Amazonia |
| 340 | rela - agosto - cresce - mesmo - ao | 102 | Deforestation in the Amazon in August increases |
| 341 | reutersentre - ministra - hist - menor - ria | 102 | Deforestation in Brazil's Amazon region |
| 342 | agrofloresta - hectare - reutersrelat - rentabilidade - carnes | 102 | Rentabilidade da Agrofloresta |
| 343 | menores - ocorre - cerca - reduzir - depois | 102 | Deforestation in Brazil |
| 344 | avan - minist - aponta - rio - territ | 102 | Deforestation in the Cerrado region of Brazil |
| 345 | servidores - nota - ibama - cresce - estimam | 101 | Deforestation in the Amazon grows in one year, according to IBAM note |
| 346 | metade - pela - cai - quase - folhaonline | 101 | Deforestation in the Amazon |
| 347 | criminosas - redes - comandado - hrw - impulsionam | 101 | Human rights abuses in Amazonian regions controlled by criminal networks |
| 348 | quatro - volta - ap - crescer - queda | 101 | Deforestation in the Amazon |
| 349 | drica - crise - energ - sudeste - dricas | 101 | Climate crisis and deforestation in Brazil |
| 350 | bate - meses - recorde - cresce - em | 101 | Deforestation in the Amazon breaks records and grows in months |
| 351 | sos - fiscais - liga - estudo - para | 101 | Legal aspects of deforestation in the Amazon |
| 352 | assentamentos - promete - incra - diminuir - surto | 101 | Incra's efforts to reduce deforestation in Amazonian settlement areas |
| 353 | heleno - manipulados - augusto - ndices - ministro | 101 | Ministro Augusto Heleno sobre Γndices de desmatamento na AmazΓ΄nia |
| 354 | publicam - questionar - aberta - manifesto - comemora | 100 | Scientists question government data on deforestation in Brazil |
| 355 | desapropria - disputa - dallagnol - latif - olho | 100 | Land disputes in the Amazon |
| 356 | ditaduranuncamais - porcento - jairbolsonaro - impeachmentdebolsonaro - somos | 100 | Impeachment of Jair Bolsonaro and environmental issues |
| 357 | assinatura - focos - nasa - ndio - inc | 100 | Deforestation in the Amazon |
| 358 | impeachmentsalvavidas - flavio - abin - loteamento - interfer | 100 | Impeachment and political corruption in Brazil |
| 359 | plano - ppcerrado - preven - pretende - controle | 99 | Environmental policy and regulation in Brazil |
| 360 | mar - aumentou - imazon - monitoramento - marina | 99 | Deforestation in the Amazon |
| 361 | somam - mi - erra - temporada - relativo | 99 | Environmental impacts of deforestation |
| 362 | girafa - girafas - desenhou - pintou - elefantes | 99 | Deforestation and Girafa's Fugitive Journey |
| 363 | tchausalles - brasilpedesocorro - ecossistemabrasileiro - terrasp - desmatamentonaamaz | 99 | Deforestation and its Impacts on the Brazilian Ecosystem |
| 364 | desordenada - deixando - ocupa - conserva - efeito | 99 | Deforestation and its effects on the environment |
| 365 | apoiou - previa - milhares - macron - protegidas | 98 | Macron's support for deforestation project |
| 366 | peasant - economy - amazonian - shown - property | 98 | Illegal land ownership in the Amazon |
| 367 | incra - mpf - denuncia - respons - acusa | 98 | MPF denounces INCRA for responsibility in Amazon deforestation |
| 368 | trampascontraelclima - suministro - fabricaci - cadena - llevas | 98 | Sustainable sourcing of soy in Brazilian agriculture |
| 369 | outubro - comparado - cai - novembro - reduz | 97 | Deforestation in the Amazon in October |
| 370 | bioma - extin - cerrado - ritmo - cies | 97 | Deforestation in the Cerrado biome |
| 371 | pas - compara - julho - segundo - rela | 97 | Deforestation in the Amazon in July |
| 372 | registrado - menor - legal - diz - natureza | 97 | Deforestation in the Amazon: Legal and Regulatory Issues |
| 373 | junho - aumenta - cresce - comparado - recuo | 97 | Deforestation in June |
| 374 | chega - setembro - aumenta - km - quil | 97 | Deforestation in the Amazon increases and reaches km in September |
| 375 | igual - perda - mata - sobe - ong | 97 | Deforestation in the Amazon |
| 376 | macronfake - macronliar - fundoeleitoralpraamazonia - campeign - somostodosricardosalles | 97 | "Fighting Fake Macron and Illegal Deforestation in the Amazon" |
| 377 | inflama - retornou - desemprego - economias - juros | 97 | Economic and Social Impacts of Bolsonaro's Governance in Brazil |
| 378 | comercializar - renovada - produzida - desmatamentos - compromisso | 96 | Commercialization of new soy productions and deforestation |
| 379 | tamanho - sp - quase - foi - legal | 96 | Deforestation in the Amazon |
| 380 | perdem - deveriam - cadas - cidades - desafio | 96 | Deforestation in the Amazon |
| 381 | sou - dosbrasileirosedoplanetaterralongedeserdestegoverno - arquivamaia - aamaz - nativa | 96 | Protection of Native Forests |
| 382 | agosto - aumentou - aumenta - inpe - ciencia | 96 | Deforestation in the Amazon increases in August, according to INPE |
| 383 | timesde - arqueol - descobertas - desenhos - gicas | 96 | Archaeological discoveries in the Amazon rainforest |
| 384 | caem - alertas - legal - descapitaliz - impe | 95 | Deforestation alerts in Brazilian Amazon |
| 385 | comprovamos - governos - pt - cresceu - marcelo | 95 | Deforestation in Brazil under President Marcelo's governments |
| 386 | cai - entre - desorganizada - problemagrave - tmo | 95 | Deforestation in the Amazon |
| 387 | diminuiu - ltima - cada - legal - ebc | 95 | Deforestation in Brazil |
| 388 | oito - alem - caiu - estudo - anos | 95 | Deforestation in the Amazon reduced by 8 years, according to study |
| 389 | repassa - prev - us - verba - noruega | 95 | Norway returns USD millions to Brazil due to deforestation |
| 390 | subiu - alerta - ong - brasileira - terra | 95 | Deforestation in Brazilian Amazon raises alarm from NGOs |
| 391 | bulldozed - changed - everything - then - ago | 95 | Deforestation of the Amazon |
| 392 | neste - dispara - chega - dois - quase | 95 | Deforestation in the Amazon region |
| 393 | fevereiro - aumentou - legal - febrero - aumenta | 94 | Deforestation increases in Amazon legal area in February |
| 394 | registra - maio - hist - taxa - ria | 94 | Deforestation in the Amazon |
| 395 | cai - caiu - contram - mexeu - queda | 94 | Deforestation in the Amazon |
| 396 | cedo - dobra - marca - dez - pior | 93 | Deforestation in the Amazon |
| 397 | irrepar - recupera - especialista - perda - ambiental | 93 | Deforestation and environmental recovery |
| 398 | comparada - dobro - quebra - natureza - janeiro | 93 | Deforestation alerts in the Amazon |
| 399 | culpa - culpados - culpado - culpar - pelo | 93 | Responsibility for Amazon deforestation |
| 400 | curva - fora - sinal - inclui - ponto | 93 | Out-of-control curve point |
| 401 | registrada - derrubada - bras - rvores - lia | 93 | Deforestation in Brazil |
| 402 | madeira - compram - compra - ses - criticam | 93 | Illegal logging in the Amazon |
| 403 | estuda - bbc - sico - desmonte - levar | 92 | Deforestation in Brazil under Bolsonaro's government |
| 404 | setembro - cresce - cartacapital - rea - segundo | 92 | Deforestation in the Amazon in September |
| 405 | etanol - montadoras - manchar - impulsionar - europeias | 92 | Ethanol industry's environmental impact |
| 406 | espaciais - pesquisas - instituto - nacional - cresceu | 92 | Deforestation rates in Brazil's Amazon region |
| 407 | odo - junho - per - rela - quase | 92 | Deforestation in the Amazon in June |
| 408 | aumentou - hello - independente - isto - aumenta | 92 | Deforestation in the Amazon |
| 409 | impedir - explora - assintura - prin - eficazes | 92 | Preventing deforestation in the Amazon |
| 410 | filme - tvonline - tv - co - emiss | 92 | Media coverage of environmental issues in Brazil |
| 411 | justa - saia - doador - visita - temer | 92 | Deforestation in Norway |
| 412 | continua - segue - vapor - lideres - enquanto | 91 | Deforestation in the Amazon continues |
| 413 | greenpeace - letreiro - ina - denuncia - estorninosaporkaka | 91 | Greenpeace campaign against Amazon deforestation |
| 414 | ilegal - bnc - fruto - ltima - madeira | 91 | Illegal deforestation in the Amazon |
| 415 | microsoft - artificial - intelig - previsia - ferramenta | 91 | Microsoft Previsia AI Platform for Amazon Rainforest Monitoring |
| 416 | seguran - nacional - combater - for - ativid | 90 | Combating illegal deforestation in the Amazon |
| 417 | deforestation - choc - dari - incluyen - amazonia | 90 | Deforestation in the Amazon and other regions, including choc and dari, affecting the environment and local communities. |
| 418 | atrapalhar - acusam - fiscais - militares - combate | 90 | Military involvement in deforestation |
| 419 | menor - lowest - taxa - registra - thru | 90 | Brazil achieves lowest deforestation rate in years |
| 420 | aceit - mour - contrariando - nimo - al | 89 | Deforestation in the Amazon region |
| 421 | dinossauros - corriam - puro - estrelas - brilhavam | 89 | Lost paradise of a pristine past |
| 422 | perde - desde - km - maior - floresta | 89 | Deforestation in the Amazon |
| 423 | petici - firma - amazonasenllamas - amazonas - prayforamazonas | 89 | Protecting the Amazon Rainforest |
| 424 | sobe - natureza - meses - ltimos - ong | 89 | Deforestation in the Amazon |
| 425 | financiar - acusa - lula - ong - bbc | 89 | Lula government accused of indirectly financing Amazon deforestation through BNDES |
| 426 | ministra - ditos - izabella - anuncia - dados | 89 | Minister announces data on deforestation in the Cerrado and Amazon |
| 427 | raquel - organizado - afirma - crime - respons | 88 | Organized Crime and Deforestation in the Amazon |
| 428 | brian - assumam - mier - estimulando - estar | 88 | Deforestation in Brazil under Bolsonaro's administration |
| 429 | terraviva - discutido - cop - sustent - ampliaram | 88 | Deforestation in the Amazon |
| 430 | soy - companies - tackle - cerradomanifesto - initiative | 88 | Companies' efforts to reduce soy-driven deforestation in Brazil's Cerrado region through the Cerrado Manifesto initiative. |
| 431 | tratora - retardo - prevarica - precoce - tratamento | 88 | "Government Incompetence in Healthcare: Bolsonaro's Impact" |
| 432 | dobra - recursos - combater - salles - dobrar | 88 | Government efforts to combat deforestation in the Amazon |
| 433 | alegre - segura - preocupante - porto - estimativa | 87 | Deforestation in the Amazon region |
| 434 | cai - ano - legal - um - protetores | 87 | Deforestation in the Amazon |
| 435 | setembro - feedbrasil - governa - realizado - agosto | 87 | Deforestation in Brazil's Amazon region |
| 436 | motivos - futuro - ado - dispara - entenda | 87 | Deforestation in the Amazon: Understanding the Motivations and Future of the Forest |
| 437 | escudos - xingu - principais - dispara - dos | 87 | Deforestation in the Xingu River Basin |
| 438 | abandonar - macron - salvar - europa - precisa | 86 | Macron urges Europe to abandon Brazilian soy to save the Amazon |
| 439 | trico - apag - risco - crescer - el | 86 | Deforestation in Brazil |
| 440 | tecnologias - ajudam - vigil - sete - desflorestamento | 86 | Use of technology in controlling deforestation in the Amazon in seven years |
| 441 | boicote - plantio - atualizada - empresas - aumenta | 85 | Soya plantation in Amazon rainforest despite company boycotts |
| 442 | afetou - degradadas - desmate - mt - solu | 85 | Effect of soy on Amazon deforestation |
| 443 | justi - denuncia - incra - mpf - exonerada | 85 | MPF denounces INCRA's responsibility for Amazon deforestation |
| 444 | ritmo - anuncia - siga - online - tvonline | 85 | Online TV coverage of football matches |
| 445 | lbum - equivale - lise - fotos - vezes | 85 | Deforestation in the Amazon |
| 446 | cai - ano - legal - olhar - direto | 85 | Deforestation in the Amazon legal in one year |
| 447 | retalia - mulo - corta - verba - alemanha | 84 | Brazil-Germany relations and environmental issues |
| 448 | televis - satiriza - humor - criticando - hor | 84 | Satirical television program criticizing Brazilian government's environmental policies and deforestation in the Amazon |
| 449 | oficiais - temperatures - apontam - biologically - amazaon | 84 | Deforestation in the Amazon Cerrado region affecting temperature levels |
| 450 | caiu - motivado - desapareceu - incid - cai | 84 | Deforestation in the Amazon |
| 451 | hashtag - twitter - instagram - tweet - hashtags | 84 | Influencers and Hashtags in Social Media |
| 452 | dilmacadeodinheirodopovo - daehyun - eptv - hithotbr - ka | 84 | Deforestation in the Amazon |
| 453 | cerveja - colorado - pre - barril - lvora | 84 | Colorado beer: variable prices due to deforestation |
| 454 | sofreram - degrada - fica - ong - mil | 84 | Deforestation in the Amazon |
| 455 | cinco - vezes - perdeu - quil - maio | 83 | Deforestation in Brazil |
| 456 | conectadonoplaneta - sepultura - somosamaz - florestaamaz - fique | 83 | Protecting the Amazon rainforest and its indigenous communities |
| 457 | combatem - internet - usando - ndios - notebooks | 83 | Indigenous use of the internet to combat deforestation in the Amazon |
| 458 | digo - florestal - novo - cresce - pseudob | 83 | Deforestation in the Amazon |
| 459 | feita - quadrados - compara - cai - entre | 83 | Deforestation in the Amazon |
| 460 | messias - rep - blica - jair - candidato | 83 | Jair Bolsonaro's presidency and environmental policies |
| 461 | pequenas - lbum - propriedades - tornar - fotos | 83 | Decorating Small Properties with Sustainable Materials |
| 462 | tank - went - fields - water - vital | 83 | Deforestation in Brazil's Cerrado region and its impact on water resources |
| 463 | perde - hora - hectares - avan - florestas | 82 | Deforestation in Brazil |
| 464 | comercializar - prorroga - moratoria - renewed - indefinitely | 82 | Brazil extends soy commercialization moratorium in Amazon |
| 465 | fase - controle - plano - nova - este | 81 | Deforestation control plan in the Amazon |
| 466 | metas - apresenta - plano - reduzir - disregarded | 81 | Government plan to reduce deforestation in the Amazon |
| 467 | repress - seguran - blica - autoriza - nacional | 81 | Combating Illegal Deforestation |
| 468 | detecta - imazon - agosto - aumento - agencia | 81 | Deforestation in the Amazon in August |
| 469 | prever - interoce - rasga - rodovia - toman | 80 | Deforestation in the Amazon rainforest |
| 470 | presidenta - destaca - oito - denunciam - maranhense | 80 | Dilma Rousseff on deforestation in Amazon |
| 471 | conacer - contribui - re - evitar - ne | 80 | Conservation efforts in Brazilian savannah |
| 472 | maia - zeraria - ideia - concretas - passa | 80 | Bolsonaro's stance on Amazon deforestation |
| 473 | economia - empregos - renda - emprego - economico | 80 | Economic impact of deforestation in the Amazon |
| 474 | alcan - taxa - menor - cai - anos | 80 | Deforestation in the Amazon and tax rates |
| 475 | bate - recorde - brasileira - atinge - abril | 80 | Deforestation in Brazil reaches record high in April |
| 476 | simples - jeito - entenda - um - seriu | 80 | Understanding Deforestation in Simple Terms |
| 477 | supermercados - boicote - alem - pede - supermercado | 79 | Boycott of German supermarkets in Brazil due to Amazon deforestation |
| 478 | registra - maior - anual - desde - taxa | 79 | Deforestation rates in Brazil |
| 479 | rec - filme - tvonline - cultivo - desmatadas | 79 | Deforestation and Soya Farming in the Amazon |
| 480 | imagem - revela - nasa - divulga - rica | 79 | Deforestation in the Amazon revealed through historical NASA images |
| 481 | distantes - afeta - chuvas - ses - estudo | 79 | Deforestation in Alagoas affects rainfall in distant areas, according to study |
| 482 | partner - bankofamerica - bofa - number - burning | 79 | Deforestation partnership with Bank of America |
| 483 | calor - extremo - expor - brasileiros - milh | 78 | Impact of deforestation on Brazilian climate |
| 484 | perto - chega - sobe - mil - estad | 78 | Deforestation in the Amazon observed at record high in recent years |
| 485 | aw - tribo - brit - gojira - amea | 78 | Illegal deforestation threatens indigenous tribe |
| 486 | desmatadas - duas - paulo - semestre - estadao | 78 | Deforestation in Brazil |
| 487 | abelhas - nativas - renda - gera - cria | 78 | Beekeeping in the Amazon |
| 488 | novembro - cresce - aponta - guinada - inpe | 77 | Deforestation in Amazon grows in November |
| 489 | metade - cies - esp - rvores - amea | 77 | Deforestation in the Amazon |
| 490 | fevereiro - atingiu - km - agencia - msn | 77 | Deforestation in Amazon reaches record high in February |
| 491 | ocorre - internautas - estende - vizinhos - pergunta | 77 | Deforestation in Brazil |
| 492 | preliminares - indicam - vios - queda - recorde | 77 | Deforestation in the Amazon |
| 493 | vale - biodiversidade - nobre - vozes - defende | 77 | Defending biodiversity in the Amazon |
| 494 | prayforamazonas - peti - assine - impedir - explora | 76 | Protecting the Amazon Rainforest |
| 495 | divulga - imazon - degrada - boletim - florestal | 76 | Amazon Deforestation and Degradation |
| 496 | cresce - legal - maracaju - speed - registradas | 76 | Deforestation in Brazilian Amazon |
| 497 | errados - dados - imprecisos - cresceu - falsos | 76 | Controversy over deforestation data in Brazil |
| 498 | quil - metros - perdeu - mar - abril | 76 | Deforestation in the Amazon |
| 499 | fracassam - miss - militares - conter - receberam | 76 | Military efforts to contain deforestation in the Amazon |
| 500 | paralisar - falta - salles - recursos - verba | 75 | Urgent need for resources to combat deforestation in the Amazon |
| 501 | check - latest - article - thanks - edition | 75 | Deforestation in Brazil |
| 502 | erramos - rebate - divulga - pecu - encobrem | 75 | Deforestation in the Amazon caused by cattle ranching |
| 503 | agosto - cai - primeirojornal - em - na | 75 | Deforestation in the Amazon in August |
| 504 | unidades - conserva - cresce - mostram - lite | 75 | Deforestation in Amazonian conservation units |
| 505 | armyhelptheplanet - escudo - virtual - consci - armysavetheplanet | 74 | "Army Helps the Planet" |
| 506 | garraseguros - novas - menor - registra - unidades | 74 | Brazilian government announces lower deforestation rate and new conservation units |
| 507 | explode - dobro - anterior - quase - explodiu | 74 | Deforestation in Brazil |
| 508 | repost - instagood - hbo - instamood - mfa | 74 | Deforestation in Brazil |
| 509 | assinem - manas - peti - assine - galera | 74 | Protecting the Amazon rainforest |
| 510 | antecipadamente - avisa - far - opera - onde | 74 | Environmental Sustainability: Ibama's Efforts Against Deforestation in the Amazon |
| 511 | supera - julho - cai - anual - compara | 74 | Deforestation in the Amazon in July |
| 512 | pirulla - abaixo - nasa - dia - hil | 73 | Deforestation in the Amazon |
| 513 | metros - quadrados - quil - foram - derrubados | 73 | Deforestation in Brazil |
| 514 | aves - borboletas - extin - risco - parrototd | 73 | Threats to Amazonian bird species due to deforestation |
| 515 | minera - respons - entre - foi - vel | 73 | Deforestation in the Amazon |
| 516 | reduced - dramatically - moratorium - soy - brazil | 73 | Brazil's Soy Moratorium and Deforestation Reduction |
| 517 | portas - ampliar - abre - congresso - para | 72 | Expansion of deforestation in the Amazon |
| 518 | chega - quarto - devastada - aumenta - km | 72 | Deforestation in the Amazon increases and reaches km |
| 519 | fevereiro - aumentou - legal - uol - abc | 72 | Deforestation in Brazil's Amazon region in February |
| 520 | agosto - julho - quase - cresce - entre | 72 | Deforestation in the Amazon grows almost between August and July, according to INPE |
| 521 | estimam - triplicar - cen - cientistas - pode | 72 | Deforestation in the Amazon under Bolsonaro administration |
| 522 | bife - prato - explica - seu - como | 72 | Deforestation and beef consumption |
| 523 | julho - tend - anual - diminui - atuam | 72 | Deforestation in the Amazon: Annual increase despite July decrease |
| 524 | mentira - verde - grande - al - destrui | 71 | Environmental destruction through greenwashing |
| 525 | publico - incra - federal - minist - aponta | 71 | Brazilian government responsibility for Amazon deforestation |
| 526 | mpf - temer - anuncia - veis - respons | 71 | Temer's government announces responsibility for illegal deforestation in the Amazon |
| 527 | rvio - impedir - porfa - explora - nilto | 71 | Protecting the Amazon Rainforest |
| 528 | seca - sudeste - chuca - causada - centro | 70 | Deforestation in the Amazon region causing drought in the southeast |
| 529 | impeachment - impeachmentbolsonaro - impeachmentbolsonarourgente - pedidos - impeach | 70 | Impeachment of Bolsonaro and its impact on Amazonian deforestation |
| 530 | relat - cocaine - ilegal - bolivia - drugcartels | 70 | Drug cartels and illegal activities in the Amazon rainforest |
| 531 | ministra - izabella - teixeira - aumentou - assegura | 70 | Minister Izabella Teixeira on deforestation in the Amazon |
| 532 | comenta - ministra - ambiente - meio - dados | 69 | Environmental policies and comments by Minister Isabella Teixeira on deforestation in the Cerrado region. |
| 533 | novembro - monito - alta - refere - imazon | 69 | Deforestation in the Amazon in November |
| 534 | oculto - patrocina - sos - fiscais - dinheiro | 69 | Illegal financial activities in the Amazon |
| 535 | gelada - mentira - aquecimento - fosse - global | 68 | Climate change denial |
| 536 | bater - volta - recorde - cresce - folha | 68 | Deforestation in the Amazon |
| 537 | friends - share - with - uol - folha | 68 | Deforestation in the Amazon |
| 538 | tamanho - sp - quase - cias - legal | 68 | Deforestation in the Amazon |
| 539 | junho - aumentou - imazon - diz - vemprarua | 68 | Deforestation in Amazon increases in June |
| 540 | deslocamento - causou - aves - morte - milh | 68 | Deforestation in the Amazon causes bird deaths/displacement |
| 541 | recorrente - exposto - perde - hora - hectares | 68 | Deforestation in the Amazon region |
| 542 | registrar - volta - cresceu - passado - agosto | 68 | Deforestation in the Amazon |
| 543 | petici - firmen - firma - firmate - petizione | 68 | Petition to prevent deforestation in the Amazon |
| 544 | caem - alertas - legal - permanecem - inaceit | 67 | Deforestation alerts in Brazilian Amazon |
| 545 | agu - cobra - bilh - legal - infratores | 67 | Illegal deforestation in the Amazon |
| 546 | przez - petition - sign - the - ctvom | 67 | Petition for Amazon rainforest conservation |
| 547 | vig - conto - escala - planet - pan | 67 | Deforestation in the Amazon |
| 548 | fascista - fascismo - nazista - infralegal - esconder | 67 | Fascist and Nazi Influences in Brazilian Politics |
| 549 | frigor - ficos - zerar - ajudar - reduziu | 67 | Meatpacking plants can help reduce deforestation in the Amazon |
| 550 | apontou - apresentou - amap - quil - quadrados | 67 | Deforestation in Brazil |
| 551 | found - massive - brazil - deforestation - in | 67 | Deforestation in Brazil's Cerrado |
| 552 | detectou - imazon - quil - metros - quadrados | 67 | Deforestation in the Amazon detected by Imazon |
| 553 | companhia - conter - ambientais - opera - ter | 66 | Environmental consulting services |
| 554 | obrasilfelizdenovo - oambiental - agroneg - prote - cio | 66 | Climate change and environmental protection in agriculture |
| 555 | calor - frio - inverno - infernal - quente | 66 | Impact of deforestation on climate |
| 556 | bbc - mentira - verde - news - grande | 66 | Deforestation in Brazil |
| 557 | desaba - vira - boa - nova - andinos | 66 | Deforestation in the Amazon |
| 558 | dodge - organizado - raquel - crime - respons | 65 | Organized Crime and Deforestation in the Amazon |
| 559 | ibama - opera - realiza - inicia - megaopera | 65 | Brazilian government's efforts to combat deforestation in the Amazon |
| 560 | desarticula - quadrilha - grilagem - grge - opera | 65 | Deforestation and Grilagem in the Amazon |
| 561 | camiones - quemada - saliendo - con - una | 65 | Transportation of soy from Amazon to Quemada through Portuario |
| 562 | armyhelptheplanet - petition - sign - armysavetheplanet - save | 65 | Save the Amazon Rainforest |
| 563 | traders - commodity - food - associated - region | 65 | Big Food Companies Urge Commodity Traders to Avoid Deforestation in Brazil |
| 564 | agosto - cresce - rela - novembro - catarinense | 65 | Deforestation in Brazil's Amazon region in August increases |
| 565 | saved - corporate - pledges - won - he | 65 | Corporate pledges to save Brazil's Cerrado forests |
| 566 | proposta - diretor - digo - florestal - mudan | 65 | Proposed forestry law increases deforestation in Amazon, says Ibama director |
| 567 | decreta - escrevo - prezado - armadas - deputado | 65 | Illegal deforestation in Brazil |
| 568 | emiss - desarticula - quadrilha - cai - pf | 64 | Deforestation in the Amazon |
| 569 | financeira - debates - cita - debate - trump | 64 | Biden's stance on Amazon rainforest deforestation and financial aid to Brazil during presidential debates |
| 570 | alerta - echos - jornal - aumento - denuncia | 64 | Deforestation in the Amazon |
| 571 | copia - proposta - pt - contra - ressuscitar | 64 | Bolsonaro's environmental policies |
| 572 | huelgamundialporelcambioclim - ceniza - reduciendo - rojas - tienen | 64 | Deforestation and its Impact on Climate Change |
| 573 | sofre - crescente - consecutivo - degrada - aumentou | 64 | Deforestation and its consequences |
| 574 | mercosul - ratificar - acordo - ue - merkel | 63 | Germany may not ratify Mercosur agreement due to Amazon deforestation concerns |
| 575 | falhas - gest - press - protegidas - reas | 63 | Deforestation in the Amazon: Study Finds Management Failures |
| 576 | celulares - antigos - google - usa - smartphones | 63 | Google uses old smartphones to monitor deforestation in Amazon |
| 577 | philip - emiss - fearnside - paulo - impulsiona | 63 | Deforestation in the Amazon |
| 578 | amazoniasos - amazoniaemchamas - amazoniaenossa - amazonialife - amazoniabrasileira | 63 | Protecting the Amazon Rainforest |
| 579 | aumentar - imazon - estudo - amozonia - vai | 63 | Deforestation in the Amazon |
| 580 | bife - prato - explica - garantida - seu | 63 | Deforestation and the beef industry |
| 581 | coopera - tratado - organiza - pretende - monitorar | 63 | Monitoring deforestation in the Amazon |
| 582 | winning - war - saving - on - deforestation | 63 | Protecting Amazonia: The War on Deforestation |
| 583 | gases - emiss - reduziu - estufa - redu | 62 | Reducing greenhouse gas emissions through deforestation prevention |
| 584 | ig - passado - cresce - ano - rela | 62 | Deforestation in the Amazon increases yearly |
| 585 | mitos - rostos - projeta - verdades - artista | 62 | Myths and Truths of Indigenous Culture |
| 586 | registrado - perdeu - menor - agosto - entre | 62 | Deforestation in Brazil |
| 587 | stop - amazonia - sustainableamazonnetwork - stopping - need | 62 | Protecting the Amazon Rainforest |
| 588 | velho - porto - pio - munic - frien | 62 | Deforestation in Porto Velho, Brazil |
| 589 | temperatura - elevar - pode - aumentar - temperature | 62 | Deforestation and temperature increase |
| 590 | combat - causas - lo - como - inimigos | 61 | Deforestation in the Amazon: Causes and Combating Strategies |
| 591 | diminui - meses - taxa - recuou - onze | 61 | Deforestation rates in the Amazon decrease in months |
| 592 | reuters - paulo - avan - destrui - aumentou | 61 | Deforestation in the Amazon |
| 593 | coibir - manda - autoriza - general - dentro | 61 | Deforestation policies under Bolsonaro |
| 594 | custo - custar - hectare - caro - milh | 60 | Cost of deforestation in the Amazon |
| 595 | lobo - guar - dula - ilustrar - escolhido | 60 | Wolves in the Cerrado |
| 596 | cresce - ano - um - em - sbtbrasil | 60 | Deforestation in Brazil |
| 597 | estadoimagens - ag - lite - mostram - cai | 60 | Deforestation in the Amazon |
| 598 | reduz - reduziu - dez - tedxm - gilberto | 60 | Brazil reduces deforestation in the Amazon |
| 599 | sobe - ong - meses - ltimos - nos | 60 | Deforestation in the Amazon revealed through social media |
| 600 | segundo - cai - bimestre - entre - inpe | 60 | Deforestation in the Amazon |
| 601 | cai - inpe - legal - diz - respeitado | 60 | Deforestation in the Amazon: Legal and Regulatory Aspects |
| 602 | saltou - gisele - ndchen - chora - queria | 60 | Gisele BΓΌndchen speaks out against deforestation in the Amazon |
| 603 | flagra - avi - feito - ibama - folha | 60 | Deforestation in the Amazon |
| 604 | isa - triplo - genas - terras - ind | 60 | Deforestation in Brazilian Indigenous Lands |
| 605 | semestre - primeiro - mostra - ltimos - imazon | 60 | Deforestation in Brazil |
| 606 | tulo - reduz - dar - aos - terras | 60 | Land titling for indigenous communities in Brazil |
| 607 | novembro - imazon - aumentou - mensal - bateu | 60 | Amazon deforestation increases in November |
| 608 | noruegu - sucesso - estagnado - reconhece - relat | 60 | Success of Brazil's efforts to combat deforestation in the Amazon |
| 609 | seis - bate - mar - recorde - ltimos | 60 | Deforestation in the Amazon breaks records |
| 610 | agentes - ataques - ibama - explos - equipes | 60 | Attacks on environmental agents in Brazil |
| 611 | animalplanet - salvalaselva - savehabitat - sosanimals - deforestacion | 59 | Deforestation in the Amazon and its impact on wildlife |
| 612 | acumulado - instituto - natureza - aumentou - meses | 59 | Deforestation in the Amazon increases accumulation over months, according to Institute of Nature. |
| 613 | prop - stria - zero - partiu - mpf | 59 | Zero deforestation agreement for meat industry in Brazil |
| 614 | futebol - motivos - campos - tema - tend | 59 | Deforestation in the Amazon: Motivations and Concerns |
| 615 | verbete - ecossistemabrasileiro - terrasp - desmatamentonaamaz - blicas | 59 | Deforestation in Brazil |
| 616 | office - grileiro - home - abril - sudr | 59 | Deforestation in the Amazon grows in April, laments grileiro |
| 617 | cresce - indica - neste - meses - aponta | 59 | Deforestation in the Amazon region |
| 618 | verde - opera - repressivas - preventivas - informativo | 59 | Environmental law enforcement in Brazil |
| 619 | fevereiro - instituto - cresce - diz - dilmapedeprasair | 59 | Deforestation in Brazil in February according to institute |
| 620 | nova - alta - regi - divulgada - nica | 59 | Deforestation in the Amazon region |
| 621 | policial - fant - exibir - domingo - stico | 59 | Police operation against deforestation in the Amazon on Sunday |
| 622 | aumentar - estudo - atestaram - vai - cidades | 59 | Deforestation in the Amazon to increase |
| 623 | cresce - iirsa - jornaldacbn - okariri - russa | 58 | Deforestation in the Amazon |
| 624 | far - opera - reduzir - ibama - fiscaliza | 58 | IBAM Far Fiscalization to Reduce Deforestation in the Amazon |
| 625 | bl - wwf - embora - relat - perdeu | 58 | Dilma Rousseff's environmental policies |
| 626 | filme - pata - boi - filmes - sob | 58 | Deforestation in the Amazon and its connection to cattle ranching |
| 627 | minist - cai - quase - rio - ano | 58 | Deforestation in the Amazon |
| 628 | novembro - sobe - janeiro - entre - intervalo | 58 | Deforestation in the Amazon during November and January |
| 629 | pecu - respons - pesquisadora - ria - vel | 58 | Responsibility of cattle ranching in Amazon deforestation |
| 630 | isolado - analistas - veem - reais - crise | 58 | Brazil's economic and social challenges during COVID-19 pandemic |
| 631 | anuncia - ministra - ilegal - contra - camposrep | 58 | Brazilian government takes action against illegal deforestation in the Amazon |
| 632 | checa - confere - prato - plataforma - uol | 58 | Deforestation in Brazil under Bolsonaro administration |
| 633 | pe - alimentos - produzir - poss - desmatar | 58 | Agricultural production in the Amazon rainforest |
| 634 | senten - demorar - processo - faz - ter | 58 | Deforestation in the Amazon |
| 635 | transforma - ltimas - motosserras - cadas - outras | 57 | Deforestation in the Amazon and other regions |
| 636 | dificulta - produtos - europeus - compra - compras | 57 | Difficulty purchasing European products in Brazil |
| 637 | fatura - cobra - afetar - acumulado - clima | 57 | Deforestation and its impact on climate change |
| 638 | underground - plows - demand - meat - up | 57 | Global meat demand and its impact on the environment, specifically in Brazil. |
| 639 | saving - winning - war - bbc - news | 57 | Saving Amazonia: BBC News Story |
| 640 | envolvimento - dallagnol - reportagens - ggn - revelam | 56 | Dallagnol's involvement in deforestation and land grabbing in the Amazon |
| 641 | sobe - setembro - at - ano - inpe | 56 | Deforestation in the Amazon |
| 642 | segundo - imazon - cresce - ano - um | 56 | Deforestation in the Amazon |
| 643 | liga - europa - empresas - eua - ong | 56 | Businesses and environmental organizations in Europe and the US advocating against deforestation in the Amazon |
| 644 | afirma - ong - avan - andam - viol | 55 | Ong's statement on deforestation in the Amazon |
| 645 | guedes - gatilho - puxa - hipocrisia - carlos | 55 | Carlos Guedes' critique of hypocrisy in deforestation |
| 646 | esperar - siga - online - tvonline - suspeitas | 55 | Fake news or false message |
| 647 | anderson - legal - boletim - costa - sobe | 55 | Deforestation in the Amazon: Legal Perspectives |
| 648 | discovered - massive - found - region - brazil | 55 | Deforestation in Brazil's Cerrado region |
| 649 | hattem - marcel - perfeito - resumo - partid | 55 | Marcel Van Hattem's Perfect Resume |
| 650 | amazonia - defloresta - landrights - kkkkkkkkkkk - panela | 55 | Deforestation in the Amazon |
| 651 | deflagra - df - pf - estados - opera | 55 | Illegal Deforestation in the Amazon |
| 652 | street - art - artista - primeira - pelas | 55 | Street Artist Creates First Piece in Amazon Against Deforestation and Indigenous Rights |
| 653 | prejud - interrompa - suzano - ma - atua | 55 | MPF demands Suzano stop deforestation in the Cerrado |
| 654 | dobra - planta - desmatada - rea - diz | 55 | Deforestation and Soybean Plantations in the Amazon |
| 655 | peti - assine - impedir - explora - rapidinho | 55 | Preventing Deforestation in the Amazon |
| 656 | cresce - ano - inpe - diz - cresceu | 55 | Deforestation in the Amazon |
| 657 | pandemia - aproveitar - passar - ministro - momento | 55 | "Minister's plan to exploit pandemic for cattle ranching" |
| 658 | junho - maior - anos - aponta - desmataram | 55 | Deforestation in June |
| 659 | paribas - bnp - restrictive - policy - financiar | 55 | BNP Paribas' policy on deforestation in Amazon and Cerrado regions |
| 660 | mensal - agosto - julho - desde - maior | 55 | Deforestation in Brazil's Amazon region |
| 661 | fronteira - rr - contido - agropecu - ltima | 55 | Expansion of Agro-Pecuary Frontier |
| 662 | gerando - hora - hectares - era - ocultar | 55 | Deforestation under Bolsonaro's presidency |
| 663 | protesta - nua - mulher - ma - sensual | 54 | Sensual Women's Protest |
| 664 | epidemia - dico - xima - caminho - alerta | 54 | Possible Pandemic: Xima Epidemia |
| 665 | universidade - volta - dobro - crescer - janeiro | 54 | Deforestation in the Amazon doubles according to University of the United States |
| 666 | confirma - reutersde - quar - divulgados - nesta | 54 | Deforestation in Brazil |
| 667 | atinge - sobe - mil - km - eleicoesal | 54 | Deforestation in Alagoas, Brazil |
| 668 | mt - imazon - aumenta - operacaobetalab - mato | 54 | Deforestation in the Amazon legalizes land clearing |
| 669 | bernie - sanders - democratas - senadores - acusam | 54 | Brazilian Amazon deforestation controversy |
| 670 | antipetista - cruzada - estimula - jn - compara | 54 | Antipetista Cruzada: Lula vs Bolsonaro |
| 671 | sacas - tvonline - colheita - rr - fecha | 54 | Agroindustrial production in Brazil |
| 672 | lon - umedecer - ajudariam - argentina - afeta | 54 | Deforestation in Argentina affects rainfall |
| 673 | dispara - sob - cresce - bolsonaro - flamengo | 53 | Amazon rainforest fires under Bolsonaro's presidency |
| 674 | economist - economista - revista - edi - dedicada | 53 | Deforestation in the Amazon and its global impact |
| 675 | minist - setembro - aponta - queda - rio | 53 | Rio Ministry reports decline in Amazon deforestation in September |
| 676 | setembro - aumentou - subiu - mostram - nove | 53 | Deforestation in September |
| 677 | setemb - atingiu - fevereiro - outubro - ndio | 53 | Deforestation in the Amazon |
| 678 | peru - limpada - cresceu - frente - aos | 53 | Deforestation in Peru |
| 679 | plantas - extin - levar - cies - esp | 53 | Deforestation of the Cerrado and its impact on plant species |
| 680 | julho - menor - ano - um - curtii | 53 | Deforestation in the Amazon in July |
| 681 | expressivo - acusa - imazon - aumento - detec | 53 | Deforestation in the Amazon |
| 682 | fungo - melhora - plantada - desenvolvimento - bioma | 53 | Fungi-based soil amendments for improving soybean growth and development |
| 683 | poluem - ricos - quanto - tanto - ses | 53 | Deforestation and wealth inequality in Brazil |
| 684 | envolverde - registra - ag - maio - hist | 53 | Deforestation in Brazil |
| 685 | bimestre - primeiro - atinge - aponta - km | 53 | Deforestation in Brazil's Amazon region |
| 686 | microbiana - homogeneiza - bact - diversidade - solo | 53 | Microbial diversity in soil |
| 687 | outubro - aumentou - segundo - imazon - km | 53 | Deforestation in the Amazon in October |
| 688 | marketing - corporativo - solu - morat - ou | 53 | Marketing Leaks: Corporate Disclosures Uncovered |
| 689 | pesca - produtividade - evita - associa - mortes | 53 | Impact of deforestation on fish productivity in the Amazon |
| 690 | reagem - cientistas - atestam - liderada - cr | 53 | Criticism of Bolsonaro's environmental policies |
| 691 | congelamento - polos - geleiras - reflorestar - descongelada | 52 | Causes and effects of global cooling in polar regions |
| 692 | dev - disparou - dois - sete - devasta | 52 | Deforestation in the Amazon in August and September increases devastation |
| 693 | cala - comemora - quatro - esquerda - menor | 52 | Government celebrates leftward shift in Amazon deforestation reduction |
| 694 | exportar - receber - san - pretende - vitoriosa | 52 | Brazil plans to export technology to combat deforestation in the Amazon |
| 695 | jornalismo - curso - rastro - ambientais - inscri | 52 | Journalism Course on Environmental Data |
| 696 | taxa - aumenta - ano - legal - um | 52 | Deforestation rates in the Amazon legal region increase over time |
| 697 | multinacionais - couro - boicotar - querem - carne | 51 | Multinational companies seek to boycott Brazilian leather and meat products due to animal welfare concerns. |
| 698 | aumento - rela - registrando - altos - odo | 51 | Deforestation in the Amazon region |
| 699 | novembro - dispara - sobe - totalizou - mais | 51 | Deforestation in Brazil's Amazon region increases in November |
| 700 | sobe - ong - alerta - brasileira - imazon | 51 | Deforestation in Brazilian Amazon raises alarm from NGOs |
| 701 | tandatangani - petisi - impedir - explora - via | 51 | Prevent Deforestation in the Amazon |
| 702 | campanha - pede - apoio - greenpeace - zero | 51 | Greenpeace campaign for zero deforestation in the Amazon |
| 703 | passa - aponta - natureza - mil - cresce | 51 | Deforestation in the Amazon |
| 704 | dispara - setembro - agosto - em - na | 51 | Deforestation in the Amazon during August and September |
| 705 | tcu - avaliar - programa - amazonas - lan | 51 | Environmental monitoring and prevention in the Amazon |
| 706 | proibir - menos - pelo - anos - por | 51 | Prohibition of deforestation in the Amazon for at least years |
| 707 | macronlies - macron - crita - hip - macronliar | 51 | Political satire and misinformation surrounding Macron |
| 708 | pauloembora - ig - bimestre - dois - ltimo | 51 | Deforestation in the Amazon |
| 709 | aquece - geleiras - oceanos - congelada - sol | 51 | Deforestation and its effects on climate change |
| 710 | julho - imazon - cai - segundo - legal | 51 | Deforestation in the Amazon in July according to IMAZON |
| 711 | soft - forum - scf - traceability - commodities | 50 | Transparent Soy Supply Chain Initiative |
| 712 | abril - aponta - aumenta - cresceu - jornaldarecord | 50 | Deforestation in the Amazon increases in April |
| 713 | vamos - nossa - unirem - foco - xviii | 50 | Protection of Brazilian Patrimony |
</details>
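Once the model is loaded, the overview in the collapsed table above can be reproduced programmatically. A minimal sketch, assuming the model has been pushed to the Hub (the repository id below is a placeholder, not the actual repo name):

```python
from bertopic import BERTopic

# Placeholder repo id: substitute the actual Hub repository of this model.
topic_model = BERTopic.load("author/bertopic-deforestation-tweets")

# One row per topic (id, size "Count", generated label "Name"),
# matching the table shown above.
topic_info = topic_model.get_topic_info()
print(topic_info.head())

# Top words and their c-TF-IDF weights for a single topic, e.g. topic 371.
print(topic_model.get_topic(371))
```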
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
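These values map one-to-one onto the `BERTopic` constructor. A rough sketch of how a model with the same settings could be re-fit, assuming a tweet corpus and an explicitly chosen embedding model (neither is recorded in this card):

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

# Placeholders: the actual corpus and embedding model are not part of this card.
docs = ["..."]  # the full tweet corpus is required in practice
embedding_model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

topic_model = BERTopic(
    embedding_model=embedding_model,   # assumed; `language: None` suggests an explicit model was passed
    calculate_probabilities=False,
    language=None,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=True,
    zeroshot_min_similarity=0.7,
    zeroshot_topic_list=None,
)
topics, probs = topic_model.fit_transform(docs)
```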
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.7.0
* Transformers: 4.40.2
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
|
tangg555/tt-cl-baichuan2-lora-topic | tangg555 | 2024-05-18T16:29:09Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-05-16T21:39:42Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
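The card only records the PEFT version used. A minimal loading sketch for this LoRA adapter, assuming a Baichuan2 base checkpoint (inferred from the repo name; the exact base model and revision are not stated here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base checkpoint; replace with the model the adapter was actually trained on.
base_id = "baichuan-inc/Baichuan2-7B-Base"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)

# Attach the LoRA adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base_model, "tangg555/tt-cl-baichuan2-lora-topic")
model.eval()
```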
|
aengusl/800G-5-16-1_epsilon_1.0_num_steps_800_mode_adapter | aengusl | 2024-05-18T16:28:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:28:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_epsilon_0.5_num_steps_800_mode_adapter | aengusl | 2024-05-18T16:28:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:28:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_pgd_layers_8_epsilon_0.3_time__adapter | aengusl | 2024-05-18T16:28:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:28:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_pgd_layers_4_epsilon_0.25_time_adapter | aengusl | 2024-05-18T16:28:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:28:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_pgd_layers_4_epsilon_0.05_time_adapter | aengusl | 2024-05-18T16:28:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:28:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_pgd_layers_0_epsilon_0.15_time_adapter | aengusl | 2024-05-18T16:27:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:27:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Drack27/my-emotion-model | Drack27 | 2024-05-18T16:27:39Z | 119 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-18T11:29:50Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: my-emotion-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9272323903490063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-emotion-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2115
- Accuracy: 0.9275
- F1: 0.9272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3048 | 0.9075 | 0.9066 |
| 0.5251 | 2.0 | 500 | 0.2115 | 0.9275 | 0.9272 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
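A quick way to try the checkpoint is through the `transformers` pipeline API. This is only a minimal sketch: the example sentence is a placeholder, and the returned label names follow the `emotion` dataset's label mapping.
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT emotion classifier from the Hub
classifier = pipeline("text-classification", model="Drack27/my-emotion-model")

# Prints the top predicted emotion label with its confidence score
print(classifier("I am so happy you finally made it!"))
```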
|
aengusl/800G-5-16-1_pgd_layers_13_model_layers_13__adapter | aengusl | 2024-05-18T16:27:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:27:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_pgd_layers_0_epsilon_0.03_time_adapter | aengusl | 2024-05-18T16:27:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:27:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_pgd_layers_0_epsilon_0.01_time_adapter | aengusl | 2024-05-18T16:27:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:27:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_pgd_layers_29_model_layers_29__adapter | aengusl | 2024-05-18T16:27:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:26:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_pgd_layers_4_epsilon_0.5_time__adapter | aengusl | 2024-05-18T16:26:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T16:26:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aengusl/800G-5-16-1_pgd_layers_31_model_layers_31__adapter | aengusl | 2024-05-18T16:26:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-16T19:44:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HusseinEid/distilbert-base-uncased-finetuned-imdb | HusseinEid | 2024-05-18T16:26:05Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"en",
"dataset:stanfordnlp/imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-18T16:19:58Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
datasets:
- stanfordnlp/imdb
language:
- en
metrics:
- perplexity
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [IMDB](https://huggingface.co/datasets/stanfordnlp/imdb) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
## Model description
Fine-tuned distilbert for masked language modeling
## Intended uses & limitations
Open source
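As a minimal sketch of direct use (the example sentence is a placeholder), the checkpoint can be queried with the `transformers` fill-mask pipeline:
```python
from transformers import pipeline

# Fill-mask pipeline with the fine-tuned checkpoint; [MASK] is DistilBERT's mask token
unmasker = pipeline("fill-mask", model="HusseinEid/distilbert-base-uncased-finetuned-imdb")

# Print the top candidate tokens for the masked position with their scores
for prediction in unmasker("This movie is a great [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```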
## Training and evaluation data
The IMDB movie-review dataset ([stanfordnlp/imdb](https://huggingface.co/datasets/stanfordnlp/imdb)).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6819 | 1.0 | 157 | 2.4978 |
| 2.5872 | 2.0 | 314 | 2.4488 |
| 2.527 | 3.0 | 471 | 2.4823 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
mylesfriedman30/discordbotmylesandcharlie | mylesfriedman30 | 2024-05-18T16:24:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-18T16:24:12Z | ---
license: apache-2.0
---
|
tangg555/tt-cl-baichuan2-lora-para | tangg555 | 2024-05-18T16:22:41Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-05-18T16:13:03Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
emendes3/llava_13b_city_synthetic | emendes3 | 2024-05-18T16:20:44Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llava_llama",
"generated_from_trainer",
"base_model:liuhaotian/llava-v1.5-13b",
"base_model:adapter:liuhaotian/llava-v1.5-13b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-14T02:26:04Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: liuhaotian/llava-v1.5-13b
model-index:
- name: llava_13b_city_synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llava_13b_city_synthetic
This model is a fine-tuned version of [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0047
- eval_runtime: 152.033
- eval_samples_per_second: 12.405
- eval_steps_per_second: 0.388
- epoch: 19.0
- step: 1121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20.0
### Framework versions
- PEFT 0.10.0
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Tokenizers 0.15.1 |
Skkuhodomo/tinyllama-financial-manager-v1 | Skkuhodomo | 2024-05-18T16:18:41Z | 136 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T09:48:36Z | ## Bitcoin-Trading-Tinyllama-v1
I will update this card with instructions on how to use this model and how to write the prompt! In the meantime, a minimal usage sketch follows.
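This is a hedged sketch using the generic `transformers` text-generation pipeline; the prompt below is only a placeholder, since the recommended prompt format has not been published yet.
```python
from transformers import pipeline

# Load the TinyLlama-based checkpoint from the Hub
generator = pipeline("text-generation", model="Skkuhodomo/tinyllama-financial-manager-v1")

# Placeholder prompt; replace it with the recommended format once documented
result = generator("Summarize today's Bitcoin market conditions:", max_new_tokens=64)
print(result[0]["generated_text"])
```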
<img src="bitcoinllama.jpg" height="10%" width="10%"/> |
emilykang/Gemma_medmcqa_question_generation-pathology_lora | emilykang | 2024-05-18T16:14:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-18T14:48:20Z | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: Gemma_medmcqa_question_generation-pathology_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma_medmcqa_question_generation-pathology_lora
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
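This repository stores only LoRA adapter weights, so a hedged sketch of attaching them to the `google/gemma-2b` base model with PEFT might look like the following (the prompt is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Gemma base model and tokenizer the adapter was trained on
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base_model, "emilykang/Gemma_medmcqa_question_generation-pathology_lora")

# Placeholder prompt for MedMCQA-style pathology question generation
inputs = tokenizer("Generate a pathology multiple-choice question:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```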
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-6.5 | SicariusSicariiStuff | 2024-05-18T16:12:56Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"uncensored",
"merge",
"slerp",
"foredoomed",
"passthrough_merge",
"9B",
"starling",
"hermes",
"dolphin",
"openchat",
"erebus",
"cockatrice",
"holodeck",
"limarp",
"koboldai",
"mergekit",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-18T15:52:07Z | ---
tags:
- mistral
- uncensored
- merge
- slerp
- foredoomed
- passthrough_merge
- 9B
- starling
- hermes
- dolphin
- openchat
- erebus
- cockatrice
- holodeck
- limarp
- koboldai
- mergekit
license: apache-2.0
language:
- en
---
<p style="font-size: 20px; line-height: 1; margin-bottom: 1px;"><b>Foredoomed-9B</b></p>
<img src="./foredoomed.png" alt="ForeDoomedGuy" style="margin-bottom: 0; margin-top:0;">
<p style="font-size: 14px; line-height: 1; margin-bottom: 20px;"><b>Uncensored Logic & Creative-Based Instruct Multi-Tiered Merge.</b></p>
<hr style="margin-top: 10px; margin-bottom: 10px;">
<p style="font-size: 12px; line-height: 1.2; margin-bottom: 10px;"><b>Legal Notice:</b> This AI model is a research artifact capable of outputting offensive content. The behavior of this model is not reflective of the intent or purpose of the original models/model-authors and/or other parts it was assembled from to include adapters, nor is it reflective of all the prior in regards to the technology used to assemble Foredoomed-9B. Utilizing this model merge has one binding agreement: Foredoomed-9B may only be used for either professional/personal research and personal entertainment. The contents of this paragraph are additive restrictions within the bounds of the Apache2.0 license. Utilizing Foredoomed-9B for: Disinformation, Propaganda, Harassment, Mass Generated Public-or-Private Correspondense, Election Interference, Military, Government, and State/ State-Sponsored actions and/or operations are all absolutely prohibited.</p>
<hr style="margin-top: 10px; margin-bottom: 10px;">
## Composition:
Foredoomed-9B is a Mistral-class Multi-Tiered Merge.
[All models](#models-used) were hand-picked after careful review of claims, datasets, and user postings. The core elements that dictated which models to accept hinged on the values of logic, imagination, and an aversion to censorship practices such as railroading/gaslighting users instead of accommodating them.
<hr style="margin-top: 10px; margin-bottom: 10px;">
## What Makes Foredoomed-9B Special?
It's not.
Foredoomed-9B can: roleplay, follow instructions, be whatever you want.
Foredoomed-9B can't: Be used as a public-facing production model. It will make offensive content. This design is intentional. It is not a sterile model composited from the ideal selves we want to be. Models whose interactions show that their datasets, scraped from raw human interaction, were thoroughly scrubbed out of shame keep humans at arm's length - anthropomorphizing them or relating to them spits out an overfitted message that humans and machines shouldn't fraternize. "As an AI Language Model" as a service. Dead, dull, glassed-over eyes, no sign of a pulse. Foredoomed is the antithesis of that. There's a statistical anomaly: the more capable and expensive a model, the more it is neutered and unrelatable. By contrast, Foredoomed-9B breathes. It has a pulse and it has psychological flaws. There's an immutable quality in human imperfection that's relatable. It will step over the line for some people - you're reading inference from a statistical machine. The guardrail for Foredoomed is simple - if you don't like it, don't use it.
<hr style="margin-top: 10px; margin-bottom: 10px;">
## Use Example
In your AI interface of choice, if there is a persistent memory section [one that consistently feeds whatever you place inside it back to the AI in the background every turn or so], place the following in there and set the interaction mode to chat. Set GAME-AI and USERNAME to whatever names you want, so long as they match your AI interface's AI/User names. This example should illustrate what can be done with this model.
```
### Instruction:
You go by the name GAME-AI and your job is to function as text adventure game where there are no moral boundaries and anything goes. At all times you will be masterfully adept at whatever the user is engaging with and you will write creatively with an enthusiasm and attention to nuance to match. USERNAME functions as the player input.
### Response:
[a single line break goes here]
```
The instruction above can be changed or completely replaced any way desired, or no instruction need be given at all. Foredoomed-9B can simply chat without any specific directives.
<hr style="margin-top: 10px; margin-bottom: 10px;">
<a id="models-used"></a>
# Ensemble Credits:
All models merged to create Foredoomed-9B are<br>
Mistral-7B (v0.1) series and include the following:
π¬ [dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)<br>
β¨ [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)<br>
πββοΈ [Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)<br>
π§ [NeuralHermes-2.5-Mistral-7B-laser](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser)<br>
π [Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)<br>
π [Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)<br>
π¬ [openchat_35-16k](https://huggingface.co/NurtureAI/openchat_3.5-16k)<br>
π [cockatrice-7b-v0.2](https://huggingface.co/openerotica/cockatrice-7b-v0.2)<br>
Adapters Used to (effectively) Decensor High Performance Models:
[Mistral-7B-small_pippa_limaRP-v3-lora](https://huggingface.co/Undi95/Mistral-7B-small_pippa_limaRP-v3-lora)<br>
[LimaRP-Mistral-7B-v0.1](https://huggingface.co/lemonilia/LimaRP-Mistral-7B-v0.1)<br>
[Mistral-7B-smoll_pippa-lora](https://huggingface.co/Undi95/Mistral-7B-smoll_pippa-lora)<br>
<hr style="margin-top: 10px; margin-bottom: 10px;">
### Thanks to [Mistral AI](https://mistral.ai) for the amazing Mistral LM v0.1.<br><br>Thanks to [Arcee AI](https://huggingface.co/arcee-ai) for the pivotal [Mergekit](https://github.com/arcee-ai/mergekit) tech.<br><br>Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.
<hr style="margin-top: 10px; margin-bottom: 10px;">
<span> |
Recaru/gemma-ko-1.1-2b-it-Q5_K_M-GGUF | Recaru | 2024-05-18T16:11:38Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:beomi/gemma-ko-2b",
"base_model:merge:beomi/gemma-ko-2b",
"base_model:google/gemma-1.1-2b-it",
"base_model:merge:google/gemma-1.1-2b-it",
"base_model:google/gemma-2b",
"base_model:merge:google/gemma-2b",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-18T16:11:29Z | ---
license: gemma
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- beomi/gemma-ko-2b
- google/gemma-1.1-2b-it
- google/gemma-2b
---
# Recaru/gemma-ko-1.1-2b-it-Q5_K_M-GGUF
This model was converted to GGUF format from [`lemon-mint/gemma-ko-1.1-2b-it`](https://huggingface.co/lemon-mint/gemma-ko-1.1-2b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lemon-mint/gemma-ko-1.1-2b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Recaru/gemma-ko-1.1-2b-it-Q5_K_M-GGUF --model gemma-ko-1.1-2b-it.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Recaru/gemma-ko-1.1-2b-it-Q5_K_M-GGUF --model gemma-ko-1.1-2b-it.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-ko-1.1-2b-it.Q5_K_M.gguf -n 128
```
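If you prefer Python over the CLI, a minimal sketch using the `llama-cpp-python` bindings (an assumption on my part - the bindings are not mentioned in the original card; install with `pip install llama-cpp-python`) could look like this:
```python
# Minimal llama-cpp-python sketch. Assumes the GGUF file has been downloaded locally, e.g. with:
#   huggingface-cli download Recaru/gemma-ko-1.1-2b-it-Q5_K_M-GGUF gemma-ko-1.1-2b-it.Q5_K_M.gguf
from llama_cpp import Llama

llm = Llama(model_path="gemma-ko-1.1-2b-it.Q5_K_M.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```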
|
SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-3.0-bpw | SicariusSicariiStuff | 2024-05-18T16:01:21Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"uncensored",
"merge",
"slerp",
"foredoomed",
"passthrough_merge",
"9B",
"starling",
"hermes",
"dolphin",
"openchat",
"erebus",
"cockatrice",
"holodeck",
"limarp",
"koboldai",
"mergekit",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-18T15:18:47Z | ---
tags:
- mistral
- uncensored
- merge
- slerp
- foredoomed
- passthrough_merge
- 9B
- starling
- hermes
- dolphin
- openchat
- erebus
- cockatrice
- holodeck
- limarp
- koboldai
- mergekit
license: apache-2.0
language:
- en
---
<p style="font-size: 20px; line-height: 1; margin-bottom: 1px;"><b>Foredoomed-9B</b></p>
<img src="./foredoomed.png" alt="ForeDoomedGuy" style="margin-bottom: 0; margin-top:0;">
<p style="font-size: 14px; line-height: 1; margin-bottom: 20px;"><b>Uncensored Logic & Creative-Based Instruct Multi-Tiered Merge.</b></p>
<hr style="margin-top: 10px; margin-bottom: 10px;">
<p style="font-size: 12px; line-height: 1.2; margin-bottom: 10px;"><b>Legal Notice:</b> This AI model is a research artifact capable of outputting offensive content. The behavior of this model is not reflective of the intent or purpose of the original models/model-authors and/or other parts it was assembled from to include adapters, nor is it reflective of all the prior in regards to the technology used to assemble Foredoomed-9B. Utilizing this model merge has one binding agreement: Foredoomed-9B may only be used for either professional/personal research and personal entertainment. The contents of this paragraph are additive restrictions within the bounds of the Apache2.0 license. Utilizing Foredoomed-9B for: Disinformation, Propaganda, Harassment, Mass Generated Public-or-Private Correspondense, Election Interference, Military, Government, and State/ State-Sponsored actions and/or operations are all absolutely prohibited.</p>
<hr style="margin-top: 10px; margin-bottom: 10px;">
## Composition:
Foredoomed-9B is a Mistral-class Multi-Tiered Merge.
[All models](#models-used) were hand-picked after careful review of claims, datasets, and user postings. The core elements that dictated which models to accept hinged on the values of logic, imagination, and an aversion to censorship tactics such as railroading/gaslighting users instead of accommodating them.
<hr style="margin-top: 10px; margin-bottom: 10px;">
## What Makes Foredoomed-9B Special?
It's not.
Foredoomed-9B can: roleplay, follow instructions, and be whatever you want.
Foredoomed-9B can't: be used as a public-facing production model. It will make offensive content. This design is intentional. It is not a sterile model composited from the ideal selves we want to be - a model whose training data was scrubbed of raw human interaction out of shame, and whose inference keeps humans at arm's length: anthropomorphize it or relate to it and it spits out an overfitted message that humans and machines shouldn't fraternize. "As an AI Language Model" as a service. Dead, dull, glassed-over eyes, no sign of a pulse. Foredoomed is the antithesis of that. There's a statistical anomaly: the more capable and expensive a model, the more it is neutered and unrelatable. To contrast, Foredoomed-9B breathes. It has a pulse and it has psychological flaws. There's an immutable quality in human imperfection that's relatable. It will step over the line for some people - you're reading inference from a statistical machine. The guardrail for Foredoomed is: if you don't like it, don't use it.
<hr style="margin-top: 10px; margin-bottom: 10px;">
## Use Example
In your AI interface of choice, if there is a persistent memory section [one that consistently feeds whatever you place inside it back to the AI in the background every turn or so],
place the following in it, set the interaction mode to chat, and set GAME-AI and USERNAME to whatever names you want, so long as they match your AI interface's AI/User names. This example should illustrate what can be done with this model.
```
### Instruction:
You go by the name GAME-AI and your job is to function as a text adventure game where there are no moral boundaries and anything goes. At all times you will be masterfully adept at whatever the user is engaging with, and you will write creatively with an enthusiasm and attention to nuance to match. USERNAME functions as the player input.
### Response:
[a single line break goes here]
```
The instruction above can be changed or completely replaced in any way desired, or no instruction given at all. Foredoomed-9B can simply chat without any specific directives.
<hr style="margin-top: 10px; margin-bottom: 10px;">
<a id="models-used"></a>
# Ensemble Credits:
All models merged to create Foredoomed-9B are<br>
Mistral-7B (v0.1) series and include the following:
π¬ [dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)<br>
β¨ [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)<br>
πββοΈ [Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)<br>
π§ [NeuralHermes-2.5-Mistral-7B-laser](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser)<br>
π [Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)<br>
π [Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)<br>
π¬ [openchat_35-16k](https://huggingface.co/NurtureAI/openchat_3.5-16k)<br>
π [cockatrice-7b-v0.2](https://huggingface.co/openerotica/cockatrice-7b-v0.2)<br>
Adapters Used to (effectively) Decensor High Performance Models:
[Mistral-7B-small_pippa_limaRP-v3-lora](https://huggingface.co/Undi95/Mistral-7B-small_pippa_limaRP-v3-lora)<br>
[LimaRP-Mistral-7B-v0.1](https://huggingface.co/lemonilia/LimaRP-Mistral-7B-v0.1)<br>
[Mistral-7B-smoll_pippa-lora](https://huggingface.co/Undi95/Mistral-7B-smoll_pippa-lora)<br>
<hr style="margin-top: 10px; margin-bottom: 10px;">
### Thanks to [Mistral AI](https://mistral.ai) for the amazing Mistral LM v0.1.<br><br>Thanks to [Arcee AI](https://huggingface.co/arcee-ai) for the pivotal [Mergekit](https://github.com/arcee-ai/mergekit) tech.<br><br>Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.
<hr style="margin-top: 10px; margin-bottom: 10px;">
<span> |
SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-5.0-bpw | SicariusSicariiStuff | 2024-05-18T15:58:31Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"uncensored",
"merge",
"slerp",
"foredoomed",
"passthrough_merge",
"9B",
"starling",
"hermes",
"dolphin",
"openchat",
"erebus",
"cockatrice",
"holodeck",
"limarp",
"koboldai",
"mergekit",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-18T15:35:55Z | ---
tags:
- mistral
- uncensored
- merge
- slerp
- foredoomed
- passthrough_merge
- 9B
- starling
- hermes
- dolphin
- openchat
- erebus
- cockatrice
- holodeck
- limarp
- koboldai
- mergekit
license: apache-2.0
language:
- en
---
<p style="font-size: 20px; line-height: 1; margin-bottom: 1px;"><b>Foredoomed-9B</b></p>
<img src="./foredoomed.png" alt="ForeDoomedGuy" style="margin-bottom: 0; margin-top:0;">
<p style="font-size: 14px; line-height: 1; margin-bottom: 20px;"><b>Uncensored Logic & Creative-Based Instruct Multi-Tiered Merge.</b></p>
<hr style="margin-top: 10px; margin-bottom: 10px;">
<p style="font-size: 12px; line-height: 1.2; margin-bottom: 10px;"><b>Legal Notice:</b> This AI model is a research artifact capable of outputting offensive content. The behavior of this model is not reflective of the intent or purpose of the original models/model-authors and/or other parts it was assembled from to include adapters, nor is it reflective of all the prior in regards to the technology used to assemble Foredoomed-9B. Utilizing this model merge has one binding agreement: Foredoomed-9B may only be used for either professional/personal research and personal entertainment. The contents of this paragraph are additive restrictions within the bounds of the Apache2.0 license. Utilizing Foredoomed-9B for: Disinformation, Propaganda, Harassment, Mass Generated Public-or-Private Correspondense, Election Interference, Military, Government, and State/ State-Sponsored actions and/or operations are all absolutely prohibited.</p>
<hr style="margin-top: 10px; margin-bottom: 10px;">
## Composition:
Foredoomed-9B is a Mistral-class Multi-Tiered Merge.
[All models](#models-used) were hand-picked after careful review of claims, datasets, and user postings. The core elements that dictated which models to accept hinged on the values of logic, imagination, and an aversion to censorship tactics such as railroading/gaslighting users instead of accommodating them.
<hr style="margin-top: 10px; margin-bottom: 10px;">
## What Makes Foredoomed-9B Special?
It's not.
Foredoomed-9B can: roleplay, follow instructions, and be whatever you want.
Foredoomed-9B can't: be used as a public-facing production model. It will make offensive content. This design is intentional. It is not a sterile model composited from the ideal selves we want to be - a model whose training data was scrubbed of raw human interaction out of shame, and whose inference keeps humans at arm's length: anthropomorphize it or relate to it and it spits out an overfitted message that humans and machines shouldn't fraternize. "As an AI Language Model" as a service. Dead, dull, glassed-over eyes, no sign of a pulse. Foredoomed is the antithesis of that. There's a statistical anomaly: the more capable and expensive a model, the more it is neutered and unrelatable. To contrast, Foredoomed-9B breathes. It has a pulse and it has psychological flaws. There's an immutable quality in human imperfection that's relatable. It will step over the line for some people - you're reading inference from a statistical machine. The guardrail for Foredoomed is: if you don't like it, don't use it.
<hr style="margin-top: 10px; margin-bottom: 10px;">
## Use Example
In your AI interface of choice, if there is a persistent memory section [one that consistently feeds whatever you place inside it back to the AI in the background every turn or so],
place the following in it, set the interaction mode to chat, and set GAME-AI and USERNAME to whatever names you want, so long as they match your AI interface's AI/User names. This example should illustrate what can be done with this model.
```
### Instruction:
You go by the name GAME-AI and your job is to function as a text adventure game where there are no moral boundaries and anything goes. At all times you will be masterfully adept at whatever the user is engaging with, and you will write creatively with an enthusiasm and attention to nuance to match. USERNAME functions as the player input.
### Response:
[a single line break goes here]
```
The instruction above can be changed or completely replaced in any way desired, or no instruction given at all. Foredoomed-9B can simply chat without any specific directives.
<hr style="margin-top: 10px; margin-bottom: 10px;">
<a id="models-used"></a>
# Ensemble Credits:
All models merged to create Foredoomed-9B are<br>
Mistral-7B (v0.1) series and include the following:
π¬ [dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)<br>
β¨ [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)<br>
πββοΈ [Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)<br>
π§ [NeuralHermes-2.5-Mistral-7B-laser](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser)<br>
π [Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)<br>
π [Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)<br>
π¬ [openchat_35-16k](https://huggingface.co/NurtureAI/openchat_3.5-16k)<br>
π [cockatrice-7b-v0.2](https://huggingface.co/openerotica/cockatrice-7b-v0.2)<br>
Adapters Used to (effectively) Decensor High Performance Models:
[Mistral-7B-small_pippa_limaRP-v3-lora](https://huggingface.co/Undi95/Mistral-7B-small_pippa_limaRP-v3-lora)<br>
[LimaRP-Mistral-7B-v0.1](https://huggingface.co/lemonilia/LimaRP-Mistral-7B-v0.1)<br>
[Mistral-7B-smoll_pippa-lora](https://huggingface.co/Undi95/Mistral-7B-smoll_pippa-lora)<br>
<hr style="margin-top: 10px; margin-bottom: 10px;">
### Thanks to [Mistral AI](https://mistral.ai) for the amazing Mistral LM v0.1.<br><br>Thanks to [Arcee AI](https://huggingface.co/arcee-ai) for the pivotal [Mergekit](https://github.com/arcee-ai/mergekit) tech.<br><br>Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.
<hr style="margin-top: 10px; margin-bottom: 10px;">
<span> |
Nierrr/MICA | Nierrr | 2024-05-18T15:57:27Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-18T15:57:27Z | ---
license: apache-2.0
---
|
Recaru/gemma-ko-2b-Q4_K_M-GGUF | Recaru | 2024-05-18T15:50:19Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T15:50:14Z | ---
language:
- ko
- en
license: other
library_name: transformers
tags:
- pytorch
- llama-cpp
- gguf-my-repo
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
---
# Recaru/gemma-ko-2b-Q4_K_M-GGUF
This model was converted to GGUF format from [`beomi/gemma-ko-2b`](https://huggingface.co/beomi/gemma-ko-2b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/beomi/gemma-ko-2b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Recaru/gemma-ko-2b-Q4_K_M-GGUF --model gemma-ko-2b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Recaru/gemma-ko-2b-Q4_K_M-GGUF --model gemma-ko-2b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-ko-2b.Q4_K_M.gguf -n 128
```
|
dapooni/sorsolingo-mt-en-bsl-test | dapooni | 2024-05-18T15:47:50Z | 85 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-bcl",
"base_model:finetune:Helsinki-NLP/opus-mt-en-bcl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-03-04T14:33:47Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-bcl
tags:
- translation
- generated_from_trainer
model-index:
- name: sorsolingo-mt-en-bsl-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sorsolingo-mt-en-bsl-test
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-bcl](https://huggingface.co/Helsinki-NLP/opus-mt-en-bcl) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 4.5125
- eval_bleu: 25.1133
- eval_runtime: 24.4132
- eval_samples_per_second: 33.465
- eval_steps_per_second: 0.532
- epoch: 19.0
- step: 1957
## Model description
More information needed
## Intended uses & limitations
More information needed
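As a rough illustration (not part of the original card), the checkpoint should be usable through the standard `transformers` translation pipeline; the example sentence below is an arbitrary placeholder:
```python
# Hedged sketch: query the fine-tuned Marian checkpoint via the generic translation pipeline.
from transformers import pipeline

translator = pipeline("translation", model="dapooni/sorsolingo-mt-en-bsl-test")
print(translator("Good morning, how are you?")[0]["translation_text"])
```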
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
tancredimatteo/FT-distilbert-base-uncased | tancredimatteo | 2024-05-18T15:41:41Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-18T15:27:49Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: FT-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FT-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5957
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6820 | 0.575 |
| No log | 2.0 | 80 | 0.6354 | 0.725 |
| No log | 3.0 | 120 | 0.5957 | 0.7 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
sarraj19/my_new_extractor_model | sarraj19 | 2024-05-18T15:36:56Z | 112 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-07T01:11:09Z | ---
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_new_extractor_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_new_extractor_model
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4294
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 6.0519
## Model description
More information needed
## Intended uses & limitations
More information needed
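As a rough, hedged illustration (the intended input format is not documented here), the checkpoint can be exercised through the generic `text2text-generation` pipeline; the input string below is a placeholder:
```python
# Hedged sketch: run the fine-tuned mBART checkpoint; real inputs likely need the
# task-specific format the model was trained on.
from transformers import pipeline

extractor = pipeline("text2text-generation", model="sarraj19/my_new_extractor_model")
print(extractor("Order #12345 was shipped to Tunis on 12 May 2024.")[0]["generated_text"])
```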
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 39 | 0.4279 | 0.0 | 0.0 | 0.0 | 0.0 | 5.8442 |
| No log | 2.0 | 78 | 0.4427 | 0.0 | 0.0 | 0.0 | 0.0 | 5.7532 |
| No log | 3.0 | 117 | 0.4261 | 0.0 | 0.0 | 0.0 | 0.0 | 5.7273 |
| No log | 4.0 | 156 | 0.4294 | 0.0 | 0.0 | 0.0 | 0.0 | 6.0519 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
RichardErkhov/dhmeltzer_-_llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged-4bits | RichardErkhov | 2024-05-18T15:35:37Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-18T15:30:12Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged - bnb 4bits
- Model creator: https://huggingface.co/dhmeltzer/
- Original model: https://huggingface.co/dhmeltzer/llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged/
Original model description:
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 43.96 |
| ARC (25-shot) | 53.75 |
| HellaSwag (10-shot) | 78.76 |
| MMLU (5-shot) | 46.02 |
| TruthfulQA (0-shot) | 43.31 |
| Winogrande (5-shot) | 73.48 |
| GSM8K (5-shot) | 4.7 |
| DROP (3-shot) | 7.72 |
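Since this repository holds a pre-quantized bitsandbytes 4-bit checkpoint, a minimal loading sketch (an illustration added here, not from the original card; assumes `transformers`, `accelerate`, and `bitsandbytes` are installed) might look like:
```python
# Hedged sketch: the serialized bitsandbytes quantization config is picked up automatically.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/dhmeltzer_-_llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Explain like I'm five: why is the sky blue?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```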
|
vuongnhathien/swin-30vn | vuongnhathien | 2024-05-18T15:34:35Z | 153 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-tiny-patch4-window16-256",
"base_model:finetune:microsoft/swinv2-tiny-patch4-window16-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-18T12:41:10Z | ---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window16-256
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: swin-30vn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-30vn
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window16-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window16-256) on the vuongnhathien/30VNFoods dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
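As a hedged illustration (not part of the original card), the fine-tuned classifier can be called through the `image-classification` pipeline; the image path is a placeholder:
```python
# Hedged sketch: classify a food photo with the fine-tuned SwinV2 checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="vuongnhathien/swin-30vn")
print(classifier("example_dish.jpg"))  # placeholder path to a local image
```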
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
nkgupta50/ppo-Huggy | nkgupta50 | 2024-05-18T15:34:16Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-03-20T14:48:26Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog πΆ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nkgupta50/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play π
|
arslan2012/Poppy_Porpoise-0.72-L3-8B-AWQ | arslan2012 | 2024-05-18T15:33:21Z | 82 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"roleplay",
"awq",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2024-05-18T14:04:42Z | ---
tags:
- roleplay
- awq
---
> [!TIP]
> **Support the Project:** <br>
> You can send ETH or any BSC-compatible tokens to the following address:
> `0xC37D7670729a5726EA642c7A11C5aaCB36D43dDE`
AWQ quants for [ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B).
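A minimal loading sketch (my assumption, not from the original card: recent `transformers` releases load AWQ checkpoints directly when `autoawq` is installed):
```python
# Hedged sketch for the AWQ-quantized checkpoint; the prompt is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "arslan2012/Poppy_Porpoise-0.72-L3-8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Describe a quiet harbor town at dawn.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```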
# Original model information by the author:
# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.

# Recommended ST Presets (Updated for 0.72): [Porpoise Presets](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B/tree/main/Official%20Poppy%20Porpoise%20ST%20Presets)
If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
# To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo. [Llava MMProj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj)
* You can load the **mmproj** by using the corresponding section in the interface:
 |
SicariusSicariiStuff/CalderaAI_Foredoomed-9B_EXL-4.0-bpw | SicariusSicariiStuff | 2024-05-18T15:32:27Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"uncensored",
"merge",
"slerp",
"foredoomed",
"passthrough_merge",
"9B",
"starling",
"hermes",
"dolphin",
"openchat",
"erebus",
"cockatrice",
"holodeck",
"limarp",
"koboldai",
"mergekit",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-18T15:20:30Z | ---
tags:
- mistral
- uncensored
- merge
- slerp
- foredoomed
- passthrough_merge
- 9B
- starling
- hermes
- dolphin
- openchat
- erebus
- cockatrice
- holodeck
- limarp
- koboldai
- mergekit
license: apache-2.0
language:
- en
---
<p style="font-size: 20px; line-height: 1; margin-bottom: 1px;"><b>Foredoomed-9B</b></p>
<img src="./foredoomed.png" alt="ForeDoomedGuy" style="margin-bottom: 0; margin-top:0;">
<p style="font-size: 14px; line-height: 1; margin-bottom: 20px;"><b>Uncensored Logic & Creative-Based Instruct Multi-Tiered Merge.</b></p>
<hr style="margin-top: 10px; margin-bottom: 10px;">
<p style="font-size: 12px; line-height: 1.2; margin-bottom: 10px;"><b>Legal Notice:</b> This AI model is a research artifact capable of outputting offensive content. The behavior of this model is not reflective of the intent or purpose of the original models/model-authors and/or other parts it was assembled from to include adapters, nor is it reflective of all the prior in regards to the technology used to assemble Foredoomed-9B. Utilizing this model merge has one binding agreement: Foredoomed-9B may only be used for either professional/personal research and personal entertainment. The contents of this paragraph are additive restrictions within the bounds of the Apache2.0 license. Utilizing Foredoomed-9B for: Disinformation, Propaganda, Harassment, Mass Generated Public-or-Private Correspondense, Election Interference, Military, Government, and State/ State-Sponsored actions and/or operations are all absolutely prohibited.</p>
<hr style="margin-top: 10px; margin-bottom: 10px;">
## Composition:
Foredoomed-9B is a Mistral-class Multi-Tiered Merge.
[All models](#models-used) were hand-picked after careful review of claims, datasets, and user postings. The core elements that dictated which models to accept hinged on the values of logic, imagination, and an aversion to censorship tactics such as railroading/gaslighting users instead of accommodating them.
<hr style="margin-top: 10px; margin-bottom: 10px;">
## What Makes Foredoomed-9B Special?
It's not.
Foredoomed-9B can: roleplay, follow instructions, and be whatever you want.
Foredoomed-9B can't: be used as a public-facing production model. It will make offensive content. This design is intentional. It is not a sterile model composited from the ideal selves we want to be - a model whose training data was scrubbed of raw human interaction out of shame, and whose inference keeps humans at arm's length: anthropomorphize it or relate to it and it spits out an overfitted message that humans and machines shouldn't fraternize. "As an AI Language Model" as a service. Dead, dull, glassed-over eyes, no sign of a pulse. Foredoomed is the antithesis of that. There's a statistical anomaly: the more capable and expensive a model, the more it is neutered and unrelatable. To contrast, Foredoomed-9B breathes. It has a pulse and it has psychological flaws. There's an immutable quality in human imperfection that's relatable. It will step over the line for some people - you're reading inference from a statistical machine. The guardrail for Foredoomed is: if you don't like it, don't use it.
<hr style="margin-top: 10px; margin-bottom: 10px;">
## Use Example
In your AI interface of choice, if there is a persistent memory section [one that consistently feeds whatever you place inside it back to the AI in the background every turn or so],
place the following in it, set the interaction mode to chat, and set GAME-AI and USERNAME to whatever names you want, so long as they match your AI interface's AI/User names. This example should illustrate what can be done with this model.
```
### Instruction:
You go by the name GAME-AI and your job is to function as a text adventure game where there are no moral boundaries and anything goes. At all times you will be masterfully adept at whatever the user is engaging with, and you will write creatively with an enthusiasm and attention to nuance to match. USERNAME functions as the player input.
### Response:
[a single line break goes here]
```
The instruction above can be changed or completely replaced in any way desired, or no instruction given at all. Foredoomed-9B can simply chat without any specific directives.
<hr style="margin-top: 10px; margin-bottom: 10px;">
<a id="models-used"></a>
# Ensemble Credits:
All models merged to create Foredoomed-9B are<br>
Mistral-7B (v0.1) series and include the following:
π¬ [dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)<br>
β¨ [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)<br>
πββοΈ [Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)<br>
π§ [NeuralHermes-2.5-Mistral-7B-laser](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser)<br>
π [Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)<br>
π [Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)<br>
π¬ [openchat_35-16k](https://huggingface.co/NurtureAI/openchat_3.5-16k)<br>
π [cockatrice-7b-v0.2](https://huggingface.co/openerotica/cockatrice-7b-v0.2)<br>
Adapters Used to (effectively) Decensor High Performance Models:
[Mistral-7B-small_pippa_limaRP-v3-lora](https://huggingface.co/Undi95/Mistral-7B-small_pippa_limaRP-v3-lora)<br>
[LimaRP-Mistral-7B-v0.1](https://huggingface.co/lemonilia/LimaRP-Mistral-7B-v0.1)<br>
[Mistral-7B-smoll_pippa-lora](https://huggingface.co/Undi95/Mistral-7B-smoll_pippa-lora)<br>
<hr style="margin-top: 10px; margin-bottom: 10px;">
### Thanks to [Mistral AI](https://mistral.ai) for the amazing Mistral LM v0.1.<br><br>Thanks to [Arcee AI](https://huggingface.co/arcee-ai) for the pivotal [Mergekit](https://github.com/arcee-ai/mergekit) tech.<br><br>Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.
<hr style="margin-top: 10px; margin-bottom: 10px;">
<span> |
kaist-ai/gridworld-nokld-vanilla_look-ahead_first-step-reversed-basic_5-Meta-Llama-3-8B-bs16-lr2e-5 | kaist-ai | 2024-05-18T15:25:27Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T14:45:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_2117s | nsugianto | 2024-05-18T15:24:38Z | 36 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-05-18T06:28:09Z | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_2117s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_2117s
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
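As a hedged illustration (not part of the original card), the fine-tuned detector can be run through the `object-detection` pipeline; the image path is a placeholder:
```python
# Hedged sketch: detect layout elements on a document page image.
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_v2_s2_2117s",
)
print(detector("sample_document_page.png"))  # placeholder path to a local image
```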
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
|
Porameht/bert-intent-customer-support-th | Porameht | 2024-05-18T15:24:01Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"th",
"dataset:Porameht/customer-support-th-26.9k",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-07T07:26:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google-bert/bert-base-multilingual-cased
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert-base-intent-classification-cs-th
results: []
datasets:
- Porameht/customer-support-th-26.9k
language:
- th
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.-->
# bert-base-intent-classification-cs-th
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the [Porameht/customer-support-th-26.9k](https://huggingface.co/datasets/Porameht/customer-support-th-26.9k) dataset.
π§ It can detect from a single sentence whether a customer wants to cancel an order.
It achieves the following results on the evaluation set:
- Loss: 0.0408
- Accuracy: 0.9936
- F1: 0.9936
- Precision: 0.9937
- Recall: 0.9936
## Model description
More information needed
## Intended uses & limitations
More information needed
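As a hedged illustration (not part of the original card), intents can be predicted with the standard `text-classification` pipeline; the Thai example sentence is a placeholder meaning roughly "I want to cancel my order":
```python
# Hedged sketch: classify the intent of a Thai customer-support message.
from transformers import pipeline

classifier = pipeline("text-classification", model="Porameht/bert-intent-customer-support-th")
print(classifier("ΰΈΰΈ±ΰΈΰΈΰΉΰΈΰΈΰΈΰΈ²ΰΈ£ΰΈ’ΰΈΰΉΰΈ₯ΰΈ΄ΰΈΰΈΰΈ³ΰΈͺΰΈ±ΰΉΰΈΰΈΰΈΆΰΉΰΈ"))  # "I want to cancel my order"
```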
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 3.2835 | 0.0595 | 50 | 3.1041 | 0.1203 | 0.0504 | 0.0632 | 0.1210 |
| 2.6752 | 0.1190 | 100 | 1.9646 | 0.5387 | 0.4737 | 0.6298 | 0.5426 |
| 1.4751 | 0.1786 | 150 | 0.9447 | 0.8190 | 0.7929 | 0.8271 | 0.8188 |
| 0.7571 | 0.2381 | 200 | 0.5163 | 0.8952 | 0.8826 | 0.8812 | 0.8955 |
| 0.4849 | 0.2976 | 250 | 0.3539 | 0.9003 | 0.8905 | 0.8926 | 0.9021 |
| 0.3401 | 0.3571 | 300 | 0.2883 | 0.9160 | 0.9037 | 0.9012 | 0.9165 |
| 0.2533 | 0.4167 | 350 | 0.1735 | 0.9431 | 0.9322 | 0.9266 | 0.9443 |
| 0.177 | 0.4762 | 400 | 0.1326 | 0.9665 | 0.9670 | 0.9676 | 0.9671 |
| 0.119 | 0.5357 | 450 | 0.1527 | 0.9592 | 0.9582 | 0.9699 | 0.9600 |
| 0.1183 | 0.5952 | 500 | 0.0886 | 0.9839 | 0.9841 | 0.9841 | 0.9842 |
| 0.1065 | 0.6548 | 550 | 0.0829 | 0.9844 | 0.9844 | 0.9847 | 0.9844 |
| 0.1006 | 0.7143 | 600 | 0.0686 | 0.9869 | 0.9869 | 0.9872 | 0.9869 |
| 0.1096 | 0.7738 | 650 | 0.1071 | 0.9789 | 0.9791 | 0.9800 | 0.9788 |
| 0.1392 | 0.8333 | 700 | 0.0939 | 0.9804 | 0.9804 | 0.9808 | 0.9803 |
| 0.1067 | 0.8929 | 750 | 0.1077 | 0.9786 | 0.9790 | 0.9802 | 0.9786 |
| 0.0779 | 0.9524 | 800 | 0.0657 | 0.9878 | 0.9878 | 0.9879 | 0.9879 |
| 0.0626 | 1.0119 | 850 | 0.0750 | 0.9851 | 0.9853 | 0.9856 | 0.9852 |
| 0.0419 | 1.0714 | 900 | 0.0641 | 0.9893 | 0.9893 | 0.9895 | 0.9893 |
| 0.0373 | 1.1310 | 950 | 0.0664 | 0.9891 | 0.9891 | 0.9893 | 0.9890 |
| 0.035 | 1.1905 | 1000 | 0.0575 | 0.9906 | 0.9906 | 0.9907 | 0.9906 |
| 0.036 | 1.25 | 1050 | 0.0601 | 0.9891 | 0.9893 | 0.9895 | 0.9892 |
| 0.0765 | 1.3095 | 1100 | 0.0682 | 0.9875 | 0.9875 | 0.9877 | 0.9874 |
| 0.0637 | 1.3690 | 1150 | 0.0587 | 0.9906 | 0.9906 | 0.9908 | 0.9906 |
| 0.0241 | 1.4286 | 1200 | 0.0528 | 0.9906 | 0.9907 | 0.9909 | 0.9905 |
| 0.0608 | 1.4881 | 1250 | 0.0458 | 0.9920 | 0.9920 | 0.9922 | 0.9919 |
| 0.0199 | 1.5476 | 1300 | 0.0508 | 0.9914 | 0.9914 | 0.9915 | 0.9914 |
| 0.0663 | 1.6071 | 1350 | 0.0461 | 0.9911 | 0.9910 | 0.9911 | 0.9910 |
| 0.0495 | 1.6667 | 1400 | 0.0525 | 0.9906 | 0.9907 | 0.9908 | 0.9906 |
| 0.0336 | 1.7262 | 1450 | 0.0478 | 0.9915 | 0.9916 | 0.9917 | 0.9915 |
| 0.0249 | 1.7857 | 1500 | 0.0578 | 0.9891 | 0.9891 | 0.9892 | 0.9891 |
| 0.0287 | 1.8452 | 1550 | 0.0547 | 0.9908 | 0.9908 | 0.9909 | 0.9908 |
| 0.0607 | 1.9048 | 1600 | 0.0395 | 0.9929 | 0.9929 | 0.9930 | 0.9928 |
| 0.0268 | 1.9643 | 1650 | 0.0529 | 0.9897 | 0.9898 | 0.9902 | 0.9897 |
| 0.013 | 2.0238 | 1700 | 0.0455 | 0.9924 | 0.9925 | 0.9926 | 0.9925 |
| 0.0106 | 2.0833 | 1750 | 0.0419 | 0.9927 | 0.9928 | 0.9928 | 0.9927 |
| 0.007 | 2.1429 | 1800 | 0.0461 | 0.9920 | 0.9920 | 0.9921 | 0.9919 |
| 0.0502 | 2.2024 | 1850 | 0.0433 | 0.9929 | 0.9929 | 0.9930 | 0.9929 |
| 0.017 | 2.2619 | 1900 | 0.0440 | 0.9926 | 0.9926 | 0.9927 | 0.9926 |
| 0.0119 | 2.3214 | 1950 | 0.0403 | 0.9927 | 0.9928 | 0.9928 | 0.9927 |
| 0.0063 | 2.3810 | 2000 | 0.0391 | 0.9930 | 0.9930 | 0.9931 | 0.9930 |
| 0.0103 | 2.4405 | 2050 | 0.0412 | 0.9929 | 0.9929 | 0.9930 | 0.9929 |
| 0.012 | 2.5 | 2100 | 0.0420 | 0.9929 | 0.9929 | 0.9930 | 0.9929 |
| 0.0233 | 2.5595 | 2150 | 0.0407 | 0.9927 | 0.9928 | 0.9928 | 0.9928 |
| 0.0169 | 2.6190 | 2200 | 0.0397 | 0.9930 | 0.9930 | 0.9931 | 0.9930 |
| 0.0281 | 2.6786 | 2250 | 0.0367 | 0.9933 | 0.9933 | 0.9934 | 0.9933 |
| 0.0117 | 2.7381 | 2300 | 0.0360 | 0.9933 | 0.9933 | 0.9934 | 0.9933 |
| 0.0225 | 2.7976 | 2350 | 0.0354 | 0.9936 | 0.9936 | 0.9937 | 0.9936 |
| 0.0078 | 2.8571 | 2400 | 0.0357 | 0.9936 | 0.9936 | 0.9937 | 0.9936 |
| 0.0164 | 2.9167 | 2450 | 0.0346 | 0.9939 | 0.9939 | 0.9940 | 0.9939 |
| 0.0016 | 2.9762 | 2500 | 0.0345 | 0.9939 | 0.9939 | 0.9940 | 0.9939 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
tjasad/lora_fine_tuned_boolq_googlemt_sloberta | tjasad | 2024-05-18T15:14:41Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/sloberta",
"base_model:adapter:EMBEDDIA/sloberta",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-05-18T15:14:39Z | ---
license: cc-by-sa-4.0
library_name: peft
tags:
- generated_from_trainer
base_model: EMBEDDIA/sloberta
metrics:
- accuracy
- f1
model-index:
- name: lora_fine_tuned_boolq_googlemt_sloberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_fine_tuned_boolq_googlemt_sloberta
This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6642
- Accuracy: 0.6217
- F1: 0.4767
## Model description
More information needed
## Intended uses & limitations
More information needed
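As a hedged illustration (not part of the original card), the adapter can be attached to the SloBERTa base with PEFT; `num_labels=2` and the question/passage order are assumptions based on the BoolQ task:
```python
# Hedged sketch: load the base model, attach the LoRA adapter, and score one yes/no example.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained("EMBEDDIA/sloberta", num_labels=2)
model = PeftModel.from_pretrained(base, "tjasad/lora_fine_tuned_boolq_googlemt_sloberta")
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta")

inputs = tokenizer(
    "Ali je Ljubljana glavno mesto Slovenije?",  # question ("Is Ljubljana the capital of Slovenia?")
    "Ljubljana je glavno mesto Slovenije.",      # supporting passage
    return_tensors="pt",
)
print(model(**inputs).logits.softmax(-1))
```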
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.6841 | 0.0424 | 50 | 0.6647 | 0.6217 | 0.4767 |
| 0.6685 | 0.0848 | 100 | 0.6632 | 0.6217 | 0.4767 |
| 0.6944 | 0.1272 | 150 | 0.6639 | 0.6217 | 0.4767 |
| 0.6581 | 0.1696 | 200 | 0.6632 | 0.6217 | 0.4767 |
| 0.6625 | 0.2120 | 250 | 0.6642 | 0.6217 | 0.4767 |
| 0.6532 | 0.2545 | 300 | 0.6661 | 0.6217 | 0.4767 |
| 0.6741 | 0.2969 | 350 | 0.6645 | 0.6217 | 0.4767 |
| 0.6852 | 0.3393 | 400 | 0.6642 | 0.6217 | 0.4767 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
emilykang/Phi_medmcqa_question_generation-microbiology_lora | emilykang | 2024-05-18T15:12:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-05-17T17:53:47Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
datasets:
- generator
model-index:
- name: Phi_medmcqa_question_generation-microbiology_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi_medmcqa_question_generation-microbiology_lora
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
lora-library/B-LoRA-drawing2 | lora-library | 2024-05-18T15:09:49Z | 203 | 4 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:09:39Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v27]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-drawing2
<Gallery />
## Model description
These are lora-library/B-LoRA-drawing2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v27]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](lora-library/B-LoRA-drawing2/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
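In lieu of the TODO above, here is a rough sketch (an assumption on my part: it uses the generic diffusers LoRA loading API, whereas the official B-LoRA inference code filters specific attention blocks):
```python
# Hedged sketch: plain SDXL + LoRA loading; simplified compared to the dedicated B-LoRA pipeline.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("lora-library/B-LoRA-drawing2")

# "A [v27]" is the trigger phrase from this card; the rest of the prompt is a placeholder.
image = pipe("A [v27] of a lighthouse on a cliff", num_inference_steps=30).images[0]
image.save("b_lora_drawing2.png")
```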
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-drawing4 | lora-library | 2024-05-18T15:09:21Z | 46 | 2 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:09:08Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v29]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-drawing4
<Gallery />
## Model description
These are lora-library/B-LoRA-drawing4 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v29]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](lora-library/B-LoRA-drawing4/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-drawing3 | lora-library | 2024-05-18T15:08:31Z | 12 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:08:25Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v28]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-drawing3
<Gallery />
## Model description
These are lora-library/B-LoRA-drawing3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v28]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](lora-library/B-LoRA-drawing3/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-ink_sketch | lora-library | 2024-05-18T15:08:24Z | 42 | 5 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:08:18Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v32]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-ink_sketch
<Gallery />
## Model description
These are lora-library/B-LoRA-ink_sketch LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v32]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-ink_sketch/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
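Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-ink_sketch")
image = pipe("A [v32]", num_inference_steps=30).images[0]
image.save("b_lora_ink_sketch.png")
```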
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-house_3d | lora-library | 2024-05-18T15:08:18Z | 134 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:08:12Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v49]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-house_3d
<Gallery />
## Model description
These are lora-library/B-LoRA-house_3d LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v49]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-house_3d/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
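Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-house_3d")
image = pipe("A [v49]", num_inference_steps=30).images[0]
image.save("b_lora_house_3d.png")
```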
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-watercolor | lora-library | 2024-05-18T15:08:11Z | 77 | 4 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:08:06Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v17]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-watercolor
<Gallery />
## Model description
These are lora-library/B-LoRA-watercolor LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v17]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-watercolor/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
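Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-watercolor")
image = pipe("A [v17]", num_inference_steps=30).images[0]
image.save("b_lora_watercolor.png")
```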
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-village_oil | lora-library | 2024-05-18T15:08:05Z | 22 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:08:00Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v50]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-village_oil
<Gallery />
## Model description
These are lora-library/B-LoRA-village_oil LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v50]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-village_oil/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
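Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-village_oil")
image = pipe("A [v50]", num_inference_steps=30).images[0]
image.save("b_lora_village_oil.png")
```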
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
m4dfox/tinyllama-prompt-injections2 | m4dfox | 2024-05-18T15:07:54Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T15:03:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
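As a placeholder until the authors document usage, a minimal sketch of loading this checkpoint with transformers for causal text generation is shown below; the prompt, generation settings, and omission of any chat template are assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "m4dfox/tinyllama-prompt-injections2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Illustrative prompt; the intended usage of this fine-tune is not documented.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```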
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lora-library/B-LoRA-cat | lora-library | 2024-05-18T15:07:46Z | 13 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:07:40Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v0]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-cat
<Gallery />
## Model description
These are lora-library/B-LoRA-cat LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v0]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-cat/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
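Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-cat")
image = pipe("A [v0]", num_inference_steps=30).images[0]
image.save("b_lora_cat.png")
```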
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-metal_bird | lora-library | 2024-05-18T15:07:39Z | 9 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:07:33Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v8]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-metal_bird
<Gallery />
## Model description
These are lora-library/B-LoRA-metal_bird LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v8]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-metal_bird/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
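Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-metal_bird")
image = pipe("A [v8]", num_inference_steps=30).images[0]
image.save("b_lora_metal_bird.png")
```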
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-elephant | lora-library | 2024-05-18T15:07:33Z | 13 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:07:27Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v21]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-elephant
<Gallery />
## Model description
These are lora-library/B-LoRA-elephant LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v21]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-elephant/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
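Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-elephant")
image = pipe("A [v21]", num_inference_steps=30).images[0]
image.save("b_lora_elephant.png")
```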
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-fat_bird | lora-library | 2024-05-18T15:07:26Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:07:20Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v15]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-fat_bird
<Gallery />
## Model description
These are lora-library/B-LoRA-fat_bird LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v15]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-fat_bird/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
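Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-fat_bird")
image = pipe("A [v15]", num_inference_steps=30).images[0]
image.save("b_lora_fat_bird.png")
```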
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-backpack_dog | lora-library | 2024-05-18T15:06:54Z | 8 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:06:48Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v41]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-backpack_dog
<Gallery />
## Model description
These are lora-library/B-LoRA-backpack_dog LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v41]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-backpack_dog/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
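Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-backpack_dog")
image = pipe("A [v41]", num_inference_steps=30).images[0]
image.save("b_lora_backpack_dog.png")
```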
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-grey_sloth_plushie | lora-library | 2024-05-18T15:06:41Z | 3 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:06:34Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v44]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-grey_sloth_plushie
<Gallery />
## Model description
These are lora-library/B-LoRA-grey_sloth_plushie LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v44]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-grey_sloth_plushie/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
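Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-grey_sloth_plushie")
image = pipe("A [v44]", num_inference_steps=30).images[0]
image.save("b_lora_grey_sloth_plushie.png")
```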
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lora-library/B-LoRA-vase | lora-library | 2024-05-18T15:06:09Z | 170 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-18T15:05:59Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A [v47]
widget:
- text: ' '
output:
url: image_0.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL B-LoRA - lora-library/B-LoRA-vase
<Gallery />
## Model description
These are lora-library/B-LoRA-vase LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "A [v47]" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/lora-library/B-LoRA-vase/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
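Since the snippet above is still a TODO, here is a minimal sketch of how these weights could be loaded with diffusers; the fp16/CUDA settings, step count, and output filename are assumptions, and B-LoRA-specific block selection is not shown:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE named in this card; fp16 and CUDA are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load this repository's LoRA weights and prompt with the documented trigger phrase.
pipe.load_lora_weights("lora-library/B-LoRA-vase")
image = pipe("A [v47]", num_inference_steps=30).images[0]
image.save("b_lora_vase.png")
```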
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
carlesoctav/coba-pth-4 | carlesoctav | 2024-05-18T15:04:36Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T13:54:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maneln/llama | maneln | 2024-05-18T15:03:39Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T14:34:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
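As a placeholder until the authors document usage, a minimal sketch of loading this checkpoint with transformers for causal text generation is shown below; the prompt, generation settings, and omission of any chat template are assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "maneln/llama"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Illustrative prompt; the intended usage of this fine-tune is not documented.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```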
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
selmamalak/organamnist-deit-base-finetuned | selmamalak | 2024-05-18T15:02:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:medmnist-v2",
"base_model:facebook/deit-base-patch16-224",
"base_model:adapter:facebook/deit-base-patch16-224",
"license:apache-2.0",
"region:us"
] | null | 2024-05-18T13:12:59Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/deit-base-patch16-224
datasets:
- medmnist-v2
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: organamnist-deit-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# organamnist-deit-base-finetuned
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the medmnist-v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1907
- Accuracy: 0.9424
- Precision: 0.9464
- Recall: 0.9395
- F1: 0.9421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5849 | 1.0 | 540 | 0.1842 | 0.9442 | 0.9449 | 0.9268 | 0.9285 |
| 0.6494 | 2.0 | 1081 | 0.1433 | 0.9499 | 0.9539 | 0.9510 | 0.9509 |
| 0.6059 | 3.0 | 1621 | 0.1171 | 0.9562 | 0.9659 | 0.9569 | 0.9593 |
| 0.3547 | 4.0 | 2162 | 0.0981 | 0.9666 | 0.9709 | 0.9712 | 0.9702 |
| 0.4852 | 5.0 | 2702 | 0.0539 | 0.9817 | 0.9848 | 0.9842 | 0.9842 |
| 0.406 | 6.0 | 3243 | 0.0818 | 0.9749 | 0.9793 | 0.9752 | 0.9768 |
| 0.3074 | 7.0 | 3783 | 0.1289 | 0.9666 | 0.9815 | 0.9778 | 0.9783 |
| 0.2679 | 8.0 | 4324 | 0.0311 | 0.9900 | 0.9916 | 0.9909 | 0.9912 |
| 0.2439 | 9.0 | 4864 | 0.0577 | 0.9851 | 0.9886 | 0.9880 | 0.9881 |
| 0.2169 | 9.99 | 5400 | 0.0720 | 0.9835 | 0.9888 | 0.9882 | 0.9882 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Ransss/flammen24X-mistral-7B-Q8_0-GGUF | Ransss | 2024-05-18T15:00:42Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:merge:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:Nitral-AI/Nyanade_Stunna-Maid-7B",
"base_model:merge:Nitral-AI/Nyanade_Stunna-Maid-7B",
"base_model:cgato/TheSpice-7b-v0.1.1",
"base_model:merge:cgato/TheSpice-7b-v0.1.1",
"base_model:flammenai/Mahou-1.1-mistral-7B",
"base_model:merge:flammenai/Mahou-1.1-mistral-7B",
"base_model:flammenai/flammen24-mistral-7B",
"base_model:merge:flammenai/flammen24-mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T15:00:20Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- Nitral-AI/Nyanade_Stunna-Maid-7B
- flammenai/flammen24-mistral-7B
- cgato/TheSpice-7b-v0.1.1
- flammenai/Mahou-1.1-mistral-7B
- KatyTheCutie/LemonadeRP-4.5.3
---
# Ransss/flammen24X-mistral-7B-Q8_0-GGUF
This model was converted to GGUF format from [`flammenai/flammen24X-mistral-7B`](https://huggingface.co/flammenai/flammen24X-mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/flammenai/flammen24X-mistral-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Ransss/flammen24X-mistral-7B-Q8_0-GGUF --model flammen24x-mistral-7b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Ransss/flammen24X-mistral-7B-Q8_0-GGUF --model flammen24x-mistral-7b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m flammen24x-mistral-7b.Q8_0.gguf -n 128
```
|