modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, UTC]: 2020-02-15 11:33:14 to 2025-06-04 12:29:36) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (468 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (54 classes) | createdAt (timestamp[us, UTC]: 2022-03-02 23:29:04 to 2025-06-04 12:29:27) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
ozzyonfire/bird-species-classifier | ozzyonfire | 2024-03-11T01:00:42Z | 150 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"efficientnet",
"image-classification",
"biology",
"vision",
"en",
"dataset:chriamue/bird-species-dataset",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-10T18:57:02Z | ---
license: mit
datasets:
- chriamue/bird-species-dataset
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: image-classification
tags:
- biology
- image-classification
- vision
model-index:
- name: bird-species-classifier
results:
- task:
type: ImageClassification
dataset:
type: chriamue/bird-species-dataset
name: Bird Species
config: default
split: validation
metrics:
- type: accuracy
value: 96.8
- type: loss
value: 0.1379
---
# Model Card for "Bird Species Classifier"
This model is derived from chriamue/bird-species-classifier and has been retrained using ResNet50 in the hope of getting it running with Transformers.js.
## Model Description
The "Bird Species Classifier" is a state-of-the-art image classification model designed to identify various bird species from images. It uses the EfficientNet architecture and has been fine-tuned to achieve high accuracy in recognizing a wide range of bird species.
### How to Use
You can easily use the model in your Python environment with the following code:
```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
extractor = AutoFeatureExtractor.from_pretrained("chriamue/bird-species-classifier")
model = AutoModelForImageClassification.from_pretrained("chriamue/bird-species-classifier")
```
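A minimal inference sketch might look like the following; the local file `bird.jpg` is a hypothetical placeholder, not part of this repository:
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

extractor = AutoFeatureExtractor.from_pretrained("chriamue/bird-species-classifier")
model = AutoModelForImageClassification.from_pretrained("chriamue/bird-species-classifier")

# "bird.jpg" is a hypothetical local photo of a bird.
image = Image.open("bird.jpg").convert("RGB")
inputs = extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to a species label.
predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```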
### Applications
- Bird species identification for educational or ecological research.
- Assistance in biodiversity monitoring and conservation efforts.
- Enhancing user experience in nature apps and platforms.
## Training Data
The model was trained on the "Bird Species" dataset, which is a comprehensive collection of bird images. Key features of this dataset include:
- **Total Species**: 525 bird species.
- **Training Images**: 84,635 images.
- **Validation Images**: 2,625 images.
- **Test Images**: 2,625 images.
- **Image Format**: Color images (224x224x3) in JPG format.
- **Source**: Kaggle.
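The dataset can be loaded directly from the Hub, for example (a minimal sketch; see the dataset card for the exact split names and features):
```python
from datasets import load_dataset

# Downloads the bird-species dataset referenced above.
dataset = load_dataset("chriamue/bird-species-dataset")
print(dataset)
```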
## Training Results
The model achieved impressive results after 6 epochs of training:
- **Accuracy**: 96.8%
- **Loss**: 0.1379
- **Runtime**: 136.81 seconds
- **Samples per Second**: 19.188
- **Steps per Second**: 1.206
- **Total Training Steps**: 31,740
These metrics indicate a high level of performance, making the model reliable for practical applications.
## Limitations and Bias
- The performance of the model might vary under different lighting conditions or image qualities.
- The model's accuracy is dependent on the diversity and representation in the training dataset. It may perform less effectively on bird species not well represented in the dataset.
## Ethical Considerations
This model should be used responsibly, considering privacy and environmental impacts. It should not be used for harmful purposes such as targeting endangered species or violating wildlife protection laws.
## Acknowledgements
We would like to acknowledge the creators of the dataset on Kaggle for providing a rich source of data that made this model possible.
## See also
- [Bird Species Dataset](https://huggingface.co/datasets/chriamue/bird-species-dataset)
- [Kaggle Dataset](https://www.kaggle.com/datasets/gpiosenka/100-bird-species/data)
- [Bird Species Classifier](https://huggingface.co/dennisjooo/Birds-Classifier-EfficientNetB2)
|
ZainAli60/miner_1 | ZainAli60 | 2024-03-11T00:59:16Z | 175 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-11T00:58:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aliceDollMix/aliceDollMix_v2 | aliceDollMix | 2024-03-11T00:33:25Z | 0 | 7 | null | [
"stable-diffusion",
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-09T09:07:38Z | ---
license: creativeml-openrail-m
language:
- ja
tags:
- stable-diffusion
---
# aliceDollMix_v2
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/header.png">
## Overview
- **aliceDollMix_v2** is a merged model specialized for doll-type expressions by merging various models.
- Compared with the previous version, the body balance and hair gloss have been adjusted, and collapsed hands occur less often.
- A VAE is included, but feel free to use whichever VAE you prefer.
- **No child pornography, please! Never!**
<hr>
## Recommended Settings
```
Steps:30
Sampler:DPM++ 2M Karras
CFG scale:7.5
Denoising strength:0.35 - 0.55
Hires steps:30
Hires upscaler:SwinIR_4x
Clip skip:2
```
Negative:
```
EasyNegativeV2,(worst quality, low quality),text
```
EasyNegativeV2<br>
[https://huggingface.co/gsdf/Counterfeit-V3.0/tree/main/embedding](https://huggingface.co/gsdf/Counterfeit-V3.0/tree/main/embedding)
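If you work with `diffusers` rather than a WebUI, a rough equivalent of the recommended settings above might look like the sketch below. The checkpoint filename is an assumption, and EasyNegativeV2 is a textual-inversion embedding that has to be loaded separately, so it is omitted from the negative prompt here.
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Hypothetical local filename for this model's checkpoint.
pipe = StableDiffusionPipeline.from_single_file(
    "aliceDollMix_v2.safetensors", torch_dtype=torch.float16
).to("cuda")

# Approximates "DPM++ 2M Karras" from the recommended settings.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "1girl, kawaii, alice in wonderland, dancing with rabbits",
    negative_prompt="(worst quality, low quality), text",
    num_inference_steps=30,
    guidance_scale=7.5,
    clip_skip=2,  # requires a recent diffusers release
).images[0]
image.save("alice.png")
```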
<hr>
## Examples
<div>
<div style="display:flex; justify-content:center; align-items:top; flex-wrap:wrap;">
<div style="width:48%;margin-right:2%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/sample-50549171.png" alt="1girl, kawaii, alice in wonderland, dancing with rabbits" style="margin-bottom:1em;">
1girl, kawaii, alice in wonderland, dancing with rabbits<br>
Seed:50549171
</div>
<div style="width:48%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/sample-508031598.png" alt="1girl, kawaii, alice in wonderland, talking cheshire cat" style="margin-bottom:1em;">
1girl, kawaii, alice in wonderland, talking cheshire cat<br>
Seed:508031598
</div>
<div style="width:48%;margin-right:2%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/sample-3627818516.png" alt="1girl, kawaii, alice in wonderland, fighting jabberwock" style="margin-bottom:1em;">
1girl, kawaii, alice in wonderland, fighting jabberwock<br>
Seed:3627818516
</div>
<div style="width:48%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/sample-3306140414.png" alt="1girl, kawaii, alice, portrait" style="margin-bottom:1em;">
1girl, kawaii, alice, portrait<br>
Seed:3306140414
</div>
</div>
</div>
<hr>
## Tips
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/4154669915_default.png">
<details>
<summary>Prompt</summary>
```
(masterpiece, best quality), 1girl, kawaii, sky blue eyes, pink lip, (((blonde hair, bangs, long twintail, long straight hair))), (Cute pyjamas), (sitting), ((kawaii room,pastel color room)), gothic room, small window, (white bed),bookshelf and books, (small plants and flowers corner), (cute miscellaneous goods and stuffed Animals), dresser, mirror, messy room
Negative:EasyNegativeV2,(worst quality, low quality),text
```
</details>
The prompt "**realistic**" can be used to change the texture of the image shown above, as in the examples below.
<div>
<div style="display:flex; justify-content:center; align-items:center; flex-wrap:wrap;">
<div style="width:48%;margin-right:2%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/4154669915_pos1.2.png" alt="realistic:1.33" style="margin-bottom:1em;">
realistic:1.2<br>
</div>
<div style="width:48%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/4154669915_pos1.5.png" alt="realistic:1.61" style="margin-bottom:1em;">
realistic:1.5<br>
</div>
<div style="width:48%;margin-right:2%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/4154669915_neg1.2.png" alt="Negative realistic:1.33" style="margin-bottom:1em;">
Negative:realistic:1.2<br>
</div>
<div style="width:48%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix_v2/resolve/main/images/4154669915_neg1.5.png" alt="Negative realistic:1.61" style="margin-bottom:1em;">
Negative:realistic:1.5<br>
</div>
</div>
</div>
Adjust the value of "**realistic**" based on the condition of the original image to get the desired texture.
<hr>
## License
❌ = Not allowed / ✅ = Allowed<br>
❌ Intentionally create or share any illegal or harmful output or content using this model
❌ Have different permissions when sharing
❌ Use of this model for commercial image generation services
❌ The act of selling this model or a model merged with this model
❌ The act of not sharing a copy of CreativeML OpenRAIL-M with all users, including the same usage restrictions when distributing or redistributing a merged model of this model.
✅ Commercial use of images generated by this model. However, illegal or harmful images are prohibited.
✅ Use or redistribution of merged models using this model
✅ Use of this model without crediting the model
❌ Violation of the following description
<br>
### **CreativeML OpenRAIL-M license**
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
<ol>
<li>You can't use the model to deliberately produce nor share illegal or harmful outputs or content</li>
<li>The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here: <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">https://huggingface.co/spaces/CompVis/stable-diffusion-license</a></li>
</ol> |
hamzasidat/DistilBertResults3 | hamzasidat | 2024-03-11T00:30:08Z | 177 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-11T00:29:55Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: DistilBertResults3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBertResults3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1496
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
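For reference, the hyperparameters above map roughly onto `transformers` `TrainingArguments` as in the sketch below (the output directory is a placeholder; the listed Adam settings are the library defaults):
```python
from transformers import TrainingArguments

# Sketch only: values mirror the list above; "DistilBertResults3" is a placeholder output path.
training_args = TrainingArguments(
    output_dir="DistilBertResults3",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```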
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2242 | 1.0 | 1000 | 0.1795 | 0.929 |
| 0.1287 | 2.0 | 2000 | 0.1496 | 0.9375 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
aliceDollMix/aliceDollMix | aliceDollMix | 2024-03-11T00:25:10Z | 0 | 31 | null | [
"stable-diffusion",
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T02:40:11Z | ---
license: creativeml-openrail-m
language:
- ja
tags:
- stable-diffusion
---
# aliceDollMix
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/header.png">
## Overview
- **aliceDollMix** is a merged model specialized for doll-type expressions by merging various models.
- A VAE is included, but feel free to use whichever VAE you prefer.
- **No child pornography, please! Never!**
<hr>
## Recommended Settings
```
Steps:30 ~ 60
Sampler:DPM++ SDE Karras
CFG scale:9
Denoising strength:0.35~0.55
Hires steps:30
Hires upscaler:SwinIR_4x
Clip skip:2
```
Negative:
```
EasyNegativeV2,negative_hand-neg,(worst quality, low quality:1.2),(flat shading,flat painting:1.3), text,nsfw,
```
EasyNegativeV2<br>
[https://huggingface.co/gsdf/Counterfeit-V3.0/tree/main/embedding](https://huggingface.co/gsdf/Counterfeit-V3.0/tree/main/embedding)
negative_hand-neg<br>
[https://civitai.com/models/56519/negativehand-negative-embedding](https://civitai.com/models/56519/negativehand-negative-embedding)
<hr>
## Examples
<div>
<div style="display:flex; justify-content:center; align-items:center; flex-wrap:wrap;">
<div style="width:48%;margin-right:2%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/sample-3499369929.png" alt="1girl,kawaii,alice in wonderland,Dancing with the Rabbits" style="margin-bottom:1em;">
1girl,kawaii,alice in wonderland,Dancing with the Rabbits<br>
Seed:3499369929
</div>
<div style="width:48%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/sample-2412994815.png" alt="1girl,kawaii,alice in wonderland,talking to the Cheshire Cat" style="margin-bottom:1em;">
1girl,kawaii,alice in wonderland,talking to the Cheshire Cat<br>
Seed:2412994815
</div>
<div style="width:48%;margin-right:2%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/sample-2412994814.png" alt="1girl,kawaii,alice in wonderland,fighting the Jabberwock" style="margin-bottom:1em;">
1girl,kawaii,alice in wonderland,fighting the Jabberwock<br>
Seed:2412994814
</div>
<div style="width:48%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/sample-3472302643.png" alt="1girl,face focus,portrait photography,Rembrandt lighting" style="margin-bottom:1em;">
1girl,face focus,portrait photography,Rembrandt lighting<br>
Seed:3472302643
</div>
</div>
</div>
<hr>
## Tips
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/3207700638_default.png">
<details>
<summary>Prompt</summary>
```
(masterpiece, best quality),
1girl,kawaii,baby face,(sky blue eyes,slanted eyes,round eyes),pink lip, (blonde hair,bangs,long twintail,long straight hair:1.3),flat body,flat chest,
(Cute pyjamas),(sitting),(waving),
(kawaii room,pastel color room:1.2),gothic room,small window,(white bed),bookshelf and books,(small plants and flowers corner),dresser,mirror,messy room,(cute miscellaneous goods and stuffed Animals)
Negative:
EasyNegativeV2,(extra fingers,fewer fingers),(worst quality, low quality:1.2),(flat shading,flat painting:1.3), text,nsfw,
```
</details>
The prompt "**photorealistic**" can be used to change the texture of the image shown above, as in the examples below.
<div>
<div style="display:flex; justify-content:center; align-items:center; flex-wrap:wrap;">
<div style="width:48%;margin-right:2%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/3207700638_pos1.3.png" alt="photorealistic:1.3" style="margin-bottom:1em;">
photorealistic:1.3<br>
</div>
<div style="width:48%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/3207700638_pos1.6.png" alt="photorealistic:1.6" style="margin-bottom:1em;">
photorealistic:1.6<br>
</div>
<div style="width:48%;margin-right:2%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/3207700638_neg1.3.png" alt="Negative photorealistic:1.3" style="margin-bottom:1em;">
Negative photorealistic:1.3<br>
</div>
<div style="width:48%;margin-bottom:2em;">
<img src="https://huggingface.co/aliceDollMix/aliceDollMix/resolve/main/images/3207700638_neg1.6.png" alt="Negative photorealistic:1.6" style="margin-bottom:1em;">
Negative photorealistic:1.6<br>
</div>
</div>
</div>
Adjust the value of "**photorealistic**" based on the condition of the original image to get the desired texture.
<hr>
## License
❌ = Not allowed / ✅ = Allowed<br>
❌ Intentionally create or share any illegal or harmful output or content using this model
❌ Have different permissions when sharing
❌ Use of this model for commercial image generation services
❌ The act of selling this model or a model merged with this model
❌ The act of not sharing a copy of CreativeML OpenRAIL-M with all users, including the same usage restrictions when distributing or redistributing a merged model of this model.
✅ Commercial use of images generated by this model. However, illegal or harmful images are prohibited.
✅ Use or redistribution of merged models using this model
✅ Use of this model without crediting the model
❌ Violation of the following description
<br>
### **CreativeML OpenRAIL-M license**
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
<ol>
<li>You can't use the model to deliberately produce nor share illegal or harmful outputs or content</li>
<li>The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here: <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">https://huggingface.co/spaces/CompVis/stable-diffusion-license</a></li>
</ol> |
Guilherme34/Samantha-pygmalion-mistral-7b | Guilherme34 | 2024-03-11T00:20:15Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Delcos/Mistral-Pygmalion-7b",
"base_model:adapter:Delcos/Mistral-Pygmalion-7b",
"region:us"
] | null | 2024-03-11T00:19:41Z | ---
library_name: peft
base_model: Delcos/Mistral-Pygmalion-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.1.dev0 |
NorGLM/NbAiLab-6B-NO-MRPC-peft | NorGLM | 2024-03-11T00:19:12Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-11T00:17:40Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NbAiLab-6B-NO-MRPC-peft is trained on top of [NbAiLab/nb-gpt-j-6B](https://huggingface.co/NbAiLab/nb-gpt-j-6B) model on [NO-MRPC](https://huggingface.co/datasets/NorGLM/NO-MRPC) dataset.
Data format:
```
input: {text_a}[SEP]{text_b}
label: {0, 1}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NbAiLab/nb-gpt-j-6B"
peft_model_id = "NorGLM/NbAiLab-6B-NO-MRPC-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score
import numpy as np

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["text_a", "text_b"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "text_a", "text_b"], axis=1)
df["label"] = df.label.map({0: 0, 1: 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-MRPC", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_org_adamw_lf_signal_it_1 | furrutiav | 2024-03-11T00:18:24Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-11T00:17:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NorGLM/NorLLama-3B-NO-MRPC-peft | NorGLM | 2024-03-11T00:17:11Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-11T00:15:43Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorLLama-3B-NO-MRPC-peft is trained on top of [NorLLama-3B](https://huggingface.co/NorGLM/NorLLama-3B) model on [NO-MRPC](https://huggingface.co/datasets/NorGLM/NO-MRPC) dataset.
Data format:
```
input: {text_a}[SEP]{text_b}
label: {0, 1}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorLLama-3B"
peft_model_id = "NorGLM/NorLLama-3B-NO-MRPC-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score
import numpy as np

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["text_a", "text_b"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "text_a", "text_b"], axis=1)
df["label"] = df.label.map({0: 0, 1: 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-MRPC", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
jeonsiyun/layoutlmv3-v29-epoch20 | jeonsiyun | 2024-03-11T00:16:22Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-11T00:16:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jeonsiyun/layoutlmv3-v29-epoch30 | jeonsiyun | 2024-03-11T00:15:09Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-11T00:14:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NorGLM/NorGPT-3B-NO-MRPC-peft | NorGLM | 2024-03-11T00:11:59Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-11T00:10:03Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-3B-NO-MRPC-peft is trained on top of [NorGPT-3B](https://huggingface.co/NorGLM/NorGPT-3B) model on [NO-MRPC](https://huggingface.co/datasets/NorGLM/NO-MRPC) dataset.
Data format:
```
input: {text_a}[SEP]{text_b}
label: {0, 1}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-3B"
peft_model_id = "NorGLM/NorGPT-3B-NO-MRPC-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score
import numpy as np

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["text_a", "text_b"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "text_a", "text_b"], axis=1)
df["label"] = df.label.map({0: 0, 1: 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-MRPC", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
NorGLM/NbAiLab-6B-NO-QNLI-peft | NorGLM | 2024-03-11T00:09:39Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T23:58:57Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NbAiLab-6B-NO-QNLI-peft is trained on top of [NbAiLab/nb-gpt-j-6B](https://huggingface.co/NbAiLab/nb-gpt-j-6B) model on [NO-QNLI](https://huggingface.co/datasets/NorGLM/NO-QNLI) dataset.
Data format:
```
input: {premise}[SEP]{hypothesis}
label: {entailment, not_entailment} -> {1,0}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NbAiLab/nb-gpt-j-6B"
peft_model_id = "NorGLM/NbAiLab-6B-NO-QNLI-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score
import numpy as np

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["premise", "hypothesis"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "premise", "hypothesis"], axis=1)
#df['label'] = df['label'].replace({1:'contradiction', -1:'entailment', 0:'neutral'})
df["label"] = df.label.map({"not_entailment": 0, "entailment": 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-QNLI", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
NorGLM/NorGPT-3B-continue-NO-QNLI-peft | NorGLM | 2024-03-11T00:08:47Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T23:53:53Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-3B-continue-NO-QNLI-peft is trained on top of [NorGPT-3B-continue](https://huggingface.co/NorGLM/NorGPT-3B-continue) model on [NO-QNLI](https://huggingface.co/datasets/NorGLM/NO-QNLI) dataset.
Data format:
```
input: {premise}[SEP]{hypothesis}
label: {entailment, not_entailment} -> {1,0}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-3B-continue"
peft_model_id = "NorGLM/NorGPT-3B-continue-NO-QNLI-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score
import numpy as np

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["premise", "hypothesis"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "premise", "hypothesis"], axis=1)
#df['label'] = df['label'].replace({1:'contradiction', -1:'entailment', 0:'neutral'})
df["label"] = df.label.map({"not_entailment": 0, "entailment": 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-QNLI", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
NorGLM/NorGPT-369M-NO-QNLI-peft | NorGLM | 2024-03-11T00:08:20Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T23:45:24Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-369M-NO-QNLI-peft is trained on top of [NorGPT-369M](https://huggingface.co/NorGLM/NorGPT-369M) model on [NO-QNLI](https://huggingface.co/datasets/NorGLM/NO-QNLI) dataset.
Data format:
```
input: {premise}[SEP]{hypothesis}
label: {entailment, not_entailment} -> {1,0}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-369M"
peft_model_id = "NorGLM/NorGPT-369M-NO-QNLI-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Evaluate the model on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
    # Build the "{premise} [SEP] {hypothesis}" input text and map labels to {1, 0}
    df["text"] = df[["premise", "hypothesis"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis=1)
    df = df.drop(["idx", "premise", "hypothesis"], axis=1)
    df["label"] = df.label.map({"not_entailment": 0, "entailment": 1})
    return Dataset.from_pandas(df)

print("--- LOADING EVAL DATA ---")
eval_data = load_dataset("NorGLM/NO-QNLI", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())

print("--- MAKING PREDICTIONS ---")
model.eval()

y_true = []
y_pred = []
count = 0
for data in eval_data:
    count = count + 1
    if count % 100 == 0:
        print(count)
    inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    y_true.append(data['label'])
    y_pred.append(predicted_class_id)

print(y_pred)
print(f"Length of true_values: {len(y_true)}")
print(f"Length of predicted_values: {len(y_pred)}")

y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
Davada/subnet6 | Davada | 2024-03-11T00:08:11Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T23:32:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lilyray/albert_irony | lilyray | 2024-03-11T00:07:58Z | 120 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:lilyray/albert_irony",
"base_model:finetune:lilyray/albert_irony",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T01:50:03Z | ---
license: apache-2.0
base_model: lilyray/albert_irony
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: albert_irony
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_irony
This model is a fine-tuned version of [lilyray/albert_irony](https://huggingface.co/lilyray/albert_irony) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6130
- Accuracy: 0.6901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 1.547052605472227e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
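
A rough sketch of how the values above map onto `transformers.TrainingArguments`; the output directory and evaluation strategy are assumptions rather than the exact configuration used, and the Adam settings are the library defaults:

```python
from transformers import TrainingArguments

# Sketch only: reproduces the listed hyperparameters; output_dir and
# evaluation_strategy are assumptions, not taken from the original run.
training_args = TrainingArguments(
    output_dir="albert_irony",            # assumed
    learning_rate=1.547052605472227e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=40,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    evaluation_strategy="epoch",          # implied by the per-epoch results table
)
```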
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 358 | 0.6295 | 0.6733 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
NorGLM/NorGPT-369M-NO-MRPC-peft | NorGLM | 2024-03-11T00:07:22Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-11T00:02:01Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-369M-NO-MRPC-peft is trained on top of [NorGPT-369M](https://huggingface.co/NorGLM/NorGPT-369M) model on [NO-MRPC](https://huggingface.co/datasets/NorGLM/NO-MRPC) dataset.
Data format:
```
input: {text_a}[SEP]{text_b}
label: {0, 1}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-369M"
peft_model_id = "NorGLM/NorGPT-369M-NO-MRPC-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
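For a quick single-pair check before running the full evaluation below, the loaded model and tokenizer can be used directly. This is only an illustrative sketch (the Norwegian sentence pair is invented) and follows the same prediction logic as the evaluation script in the next section:

```python
# Illustrative sketch: the sentence pair below is made up.
text_a = "Han kjøpte en ny bil i går."
text_b = "I går kjøpte han seg en ny bil."

text = f"{text_a} [SEP] {text_b}"  # same "{text_a}[SEP]{text_b}" format as the training data
inputs = tokenizer(text, return_tensors="pt").to(torch_device)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()  # 0/1 label as defined by the NO-MRPC dataset
print(predicted_class_id)
```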
## Inference Example
Evaluate the model on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
    # Build the "{text_a} [SEP] {text_b}" input text and keep integer labels {0, 1}
    df["text"] = df[["text_a", "text_b"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis=1)
    df = df.drop(["idx", "text_a", "text_b"], axis=1)
    df["label"] = df.label.map({0: 0, 1: 1})
    return Dataset.from_pandas(df)

print("--- LOADING EVAL DATA ---")
eval_data = load_dataset("NorGLM/NO-MRPC", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())

print("--- MAKING PREDICTIONS ---")
model.eval()

y_true = []
y_pred = []
count = 0
for data in eval_data:
    count = count + 1
    if count % 100 == 0:
        print(count)
    inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    y_true.append(data['label'])
    y_pred.append(predicted_class_id)

print(y_pred)
print(f"Length of true_values: {len(y_true)}")
print(f"Length of predicted_values: {len(y_pred)}")

y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
Holarissun/phi2-aisft-synhh-seqsampler-subset30000 | Holarissun | 2024-03-11T00:06:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-11T00:06:12Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi2-aisft-synhh-seqsampler-subset30000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-aisft-synhh-seqsampler-subset30000
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
hamzasidat/Hamzas_Albert_Irony3 | hamzasidat | 2024-03-11T00:05:08Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-11T00:05:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hamzasidat/AlbertIronyResults3 | hamzasidat | 2024-03-11T00:05:03Z | 178 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-11T00:04:51Z | ---
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: AlbertIronyResults3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AlbertIronyResults3
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6017
- Accuracy: 0.6764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 179 | 0.6639 | 0.5958 |
| No log | 2.0 | 358 | 0.6017 | 0.6764 |
| 0.5558 | 3.0 | 537 | 0.6362 | 0.6869 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Litzy619/V0309P3 | Litzy619 | 2024-03-11T00:00:49Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T06:42:48Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309P3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309P3
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9399 | 0.09 | 10 | 0.3747 |
| 0.1877 | 0.17 | 20 | 0.0934 |
| 0.1061 | 0.26 | 30 | 0.0782 |
| 0.0988 | 0.34 | 40 | 0.0751 |
| 0.0879 | 0.43 | 50 | 0.0729 |
| 0.0823 | 0.51 | 60 | 0.0776 |
| 0.0735 | 0.6 | 70 | 0.0698 |
| 0.0775 | 0.68 | 80 | 0.0778 |
| 0.0716 | 0.77 | 90 | 0.0703 |
| 0.0687 | 0.85 | 100 | 0.0701 |
| 0.0718 | 0.94 | 110 | 0.0686 |
| 0.0679 | 1.02 | 120 | 0.0699 |
| 0.0579 | 1.11 | 130 | 0.0769 |
| 0.0559 | 1.19 | 140 | 0.0664 |
| 0.0527 | 1.28 | 150 | 0.0621 |
| 0.05 | 1.37 | 160 | 0.0753 |
| 0.0526 | 1.45 | 170 | 0.0628 |
| 0.0499 | 1.54 | 180 | 0.0685 |
| 0.0487 | 1.62 | 190 | 0.0711 |
| 0.0514 | 1.71 | 200 | 0.0705 |
| 0.0572 | 1.79 | 210 | 0.0724 |
| 0.0487 | 1.88 | 220 | 0.0700 |
| 0.0485 | 1.96 | 230 | 0.0693 |
| 0.0405 | 2.05 | 240 | 0.0706 |
| 0.0338 | 2.13 | 250 | 0.0833 |
| 0.0319 | 2.22 | 260 | 0.0897 |
| 0.0277 | 2.3 | 270 | 0.0941 |
| 0.0351 | 2.39 | 280 | 0.0891 |
| 0.0333 | 2.47 | 290 | 0.0839 |
| 0.0352 | 2.56 | 300 | 0.0867 |
| 0.0357 | 2.65 | 310 | 0.0839 |
| 0.0304 | 2.73 | 320 | 0.0842 |
| 0.0308 | 2.82 | 330 | 0.0859 |
| 0.0291 | 2.9 | 340 | 0.0856 |
| 0.0335 | 2.99 | 350 | 0.0857 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
rk68/phi-1_5-finetuned-aqua-rat-qlora-gemma-teacher-1000 | rk68 | 2024-03-10T23:58:50Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-03-10T23:49:44Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: phi-1_5-finetuned-aqua-rat-qlora-gemma-teacher-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-aqua-rat-qlora-gemma-teacher-1000
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
nold/MediKAI-GGUF | nold | 2024-03-10T23:58:10Z | 20 | 0 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T15:35:00Z | ---
license: other
---
# MediKAI - Your Healthcare Companion 🏥💬
Welcome to mediKAI, the latest healthcare-focused model by HelpingAI, designed to provide personalized assistance and support with medical queries.
## Overview
mediKAI is a 14-billion-parameter model that specializes in healthcare topics and medical assistance. Whether you have questions about symptoms, treatments, medications, or general health and wellness, mediKAI is here to help.
## Languages Supported
- English
- French
- Hindi
- Spanish
- Arabic
***
Quantization of Model [OEvortex/MediKAI](https://huggingface.co/OEvortex/MediKAI).
Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline
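
One way to try these GGUF files locally is with `llama-cpp-python`; this is only a sketch — the quantization filename pattern and the prompt below are assumptions, so check the repository's file list for the variants that actually exist:

```python
# Sketch only: adjust `filename` to a GGUF file that actually exists in this repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nold/MediKAI-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quantization variant
    n_ctx=4096,
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What are common symptoms of dehydration?"}],
    max_tokens=200,
)
print(output["choices"][0]["message"]["content"])
```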
|
dzakwan/cybersec | dzakwan | 2024-03-10T23:56:53Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T15:53:39Z | ---
library_name: transformers
widget:
- messages:
- role: user
content: >-
We need to prepare for the possibility of a security incident. Can you
create an incident response plan for our organization?
inference:
parameters:
max_new_tokens: 200
tags:
- unsloth
- trl
- sft
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** M Dzakwan Falih
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shapermindai/SequinCode-7b | shapermindai | 2024-03-10T23:56:33Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-03-10T23:16:08Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
NorGLM/NbAiLab-6B-NO-BoolQ-peft | NorGLM | 2024-03-10T23:42:00Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T23:40:12Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NbAiLab-6B-NO-BoolQ-peft is trained on top of [NbAiLab/nb-gpt-j-6B](https://huggingface.co/NbAiLab/nb-gpt-j-6B) model on [NO-BoolQ](https://huggingface.co/datasets/NorGLM/NO-BoolQ) dataset.
Data format:
```
input: {passage}[SEP]{question}
label: {True, False} -> {1,0}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NbAiLab/nb-gpt-j-6B"
peft_model_id = "NorGLM/NbAiLab-6B-NO-BoolQ-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
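For a quick single-pair check before running the full evaluation below, the loaded model and tokenizer can be used directly. This is only an illustrative sketch (the Norwegian passage and question are invented) and follows the same prediction logic as the evaluation script in the next section:

```python
# Illustrative sketch: the passage/question pair below is made up.
passage = "Oslo er hovedstaden i Norge og landets største by."
question = "Er Oslo hovedstaden i Norge?"

text = f"{passage} [SEP] {question}"  # same format as the training data
inputs = tokenizer(text, return_tensors="pt").to(torch_device)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()  # 1 = True, 0 = False
print(predicted_class_id)
```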
## Inference Example
Evaluate the model on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
    # Build the "{passage} [SEP] {question}" input text and map labels to {1, 0}
    df["text"] = df[["passage", "question"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis=1)
    df = df.drop(["idx", "passage", "question"], axis=1)
    df["label"] = df.label.map({True: 1, False: 0})
    return Dataset.from_pandas(df)

print("--- LOADING EVAL DATA ---")
eval_data = load_dataset("NorGLM/NO-BoolQ", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())

print("--- MAKING PREDICTIONS ---")
model.eval()

y_true = []
y_pred = []
count = 0
for data in eval_data:
    count = count + 1
    if count % 100 == 0:
        print(count)
    inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    y_true.append(data['label'])
    y_pred.append(predicted_class_id)

print(y_pred)
print(f"Length of true_values: {len(y_true)}")
print(f"Length of predicted_values: {len(y_pred)}")

y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
btmiller/output | btmiller | 2024-03-10T23:37:44Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/flan-t5-small",
"base_model:adapter:google/flan-t5-small",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T23:37:43Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: google/flan-t5-small
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
NorGLM/NorGPT-3B-NO-BoolQ-peft | NorGLM | 2024-03-10T23:32:29Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T23:30:35Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-3B-NO-BoolQ-peft is trained on top of [NorGPT-3B](https://huggingface.co/NorGLM/NorGPT-3B) model on [NO-BoolQ](https://huggingface.co/datasets/NorGLM/NO-BoolQ) dataset.
Data format:
```
input: {passage}[SEP]{question}
label: {True, False} -> {1,0}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-3B"
peft_model_id = "NorGLM/NorGPT-3B-NO-BoolQ-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Evaluate the model on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
    # Build the "{passage} [SEP] {question}" input text and map labels to {1, 0}
    df["text"] = df[["passage", "question"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis=1)
    df = df.drop(["idx", "passage", "question"], axis=1)
    df["label"] = df.label.map({True: 1, False: 0})
    return Dataset.from_pandas(df)

print("--- LOADING EVAL DATA ---")
eval_data = load_dataset("NorGLM/NO-BoolQ", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())

print("--- MAKING PREDICTIONS ---")
model.eval()

y_true = []
y_pred = []
count = 0
for data in eval_data:
    count = count + 1
    if count % 100 == 0:
        print(count)
    inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    y_true.append(data['label'])
    y_pred.append(predicted_class_id)

print(y_pred)
print(f"Length of true_values: {len(y_true)}")
print(f"Length of predicted_values: {len(y_pred)}")

y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
grace-pro/one_half_data_high_rank_even_more_params | grace-pro | 2024-03-10T23:29:10Z | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T23:26:59Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: one_half_data_high_rank_even_more_params
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# one_half_data_high_rank_even_more_params
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7115
- Precision: 0.8275
- Recall: 0.9492
- F1-score: 0.8842
- Accuracy: 0.8394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 0.5522 | 1.0 | 24544 | 0.7115 | 0.8275 | 0.9492 | 0.8842 | 0.8394 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-5 | alinerodrigues | 2024-03-10T23:25:48Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-10T19:49:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-5
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1888
- Wer: 0.1049
- Cer: 0.0343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 31.7094 | 1.0 | 58 | 3.6328 | 1.0 | 1.0 |
| 7.717 | 2.0 | 116 | 3.1503 | 1.0 | 1.0 |
| 7.717 | 3.0 | 174 | 3.0149 | 1.0 | 1.0 |
| 3.0503 | 4.0 | 232 | 2.9605 | 1.0 | 1.0 |
| 3.0503 | 5.0 | 290 | 2.9212 | 1.0 | 1.0 |
| 2.9265 | 6.0 | 348 | 2.8995 | 1.0 | 1.0 |
| 2.8799 | 7.0 | 406 | 2.6417 | 1.0 | 1.0 |
| 2.8799 | 8.0 | 464 | 1.3314 | 0.9950 | 0.2921 |
| 2.0484 | 9.0 | 522 | 0.7075 | 0.3918 | 0.1000 |
| 2.0484 | 10.0 | 580 | 0.5138 | 0.2276 | 0.0664 |
| 0.9682 | 11.0 | 638 | 0.4169 | 0.2071 | 0.0596 |
| 0.9682 | 12.0 | 696 | 0.3580 | 0.1835 | 0.0530 |
| 0.6198 | 13.0 | 754 | 0.3281 | 0.1719 | 0.0513 |
| 0.529 | 14.0 | 812 | 0.3166 | 0.1692 | 0.0502 |
| 0.529 | 15.0 | 870 | 0.2954 | 0.1595 | 0.0483 |
| 0.445 | 16.0 | 928 | 0.2783 | 0.1502 | 0.0453 |
| 0.445 | 17.0 | 986 | 0.2721 | 0.1452 | 0.0445 |
| 0.3943 | 18.0 | 1044 | 0.2537 | 0.1390 | 0.0415 |
| 0.3798 | 19.0 | 1102 | 0.2567 | 0.1332 | 0.0416 |
| 0.3798 | 20.0 | 1160 | 0.2434 | 0.1196 | 0.0388 |
| 0.3459 | 21.0 | 1218 | 0.2421 | 0.1181 | 0.0384 |
| 0.3459 | 22.0 | 1276 | 0.2252 | 0.1150 | 0.0365 |
| 0.3187 | 23.0 | 1334 | 0.2331 | 0.1146 | 0.0368 |
| 0.3187 | 24.0 | 1392 | 0.2195 | 0.1181 | 0.0371 |
| 0.2982 | 25.0 | 1450 | 0.2180 | 0.1181 | 0.0375 |
| 0.2874 | 26.0 | 1508 | 0.2181 | 0.1069 | 0.0355 |
| 0.2874 | 27.0 | 1566 | 0.2159 | 0.1099 | 0.0360 |
| 0.2542 | 28.0 | 1624 | 0.2173 | 0.1161 | 0.0380 |
| 0.2542 | 29.0 | 1682 | 0.2127 | 0.1080 | 0.0358 |
| 0.2663 | 30.0 | 1740 | 0.2112 | 0.1158 | 0.0372 |
| 0.2663 | 31.0 | 1798 | 0.2114 | 0.1130 | 0.0364 |
| 0.2371 | 32.0 | 1856 | 0.2052 | 0.1092 | 0.0359 |
| 0.2348 | 33.0 | 1914 | 0.2044 | 0.1061 | 0.0346 |
| 0.2348 | 34.0 | 1972 | 0.2067 | 0.1072 | 0.0344 |
| 0.2368 | 35.0 | 2030 | 0.2023 | 0.1099 | 0.0350 |
| 0.2368 | 36.0 | 2088 | 0.1992 | 0.1049 | 0.0353 |
| 0.217 | 37.0 | 2146 | 0.1972 | 0.1076 | 0.0354 |
| 0.234 | 38.0 | 2204 | 0.1938 | 0.1076 | 0.0347 |
| 0.234 | 39.0 | 2262 | 0.1982 | 0.1069 | 0.0348 |
| 0.1979 | 40.0 | 2320 | 0.1945 | 0.1061 | 0.0346 |
| 0.1979 | 41.0 | 2378 | 0.2003 | 0.1069 | 0.0353 |
| 0.2062 | 42.0 | 2436 | 0.1970 | 0.1053 | 0.0350 |
| 0.2062 | 43.0 | 2494 | 0.1984 | 0.1007 | 0.0341 |
| 0.2011 | 44.0 | 2552 | 0.1992 | 0.1072 | 0.0343 |
| 0.1807 | 45.0 | 2610 | 0.1962 | 0.1084 | 0.0342 |
| 0.1807 | 46.0 | 2668 | 0.1958 | 0.1030 | 0.0334 |
| 0.1982 | 47.0 | 2726 | 0.1928 | 0.1038 | 0.0340 |
| 0.1982 | 48.0 | 2784 | 0.1961 | 0.1053 | 0.0344 |
| 0.1948 | 49.0 | 2842 | 0.1939 | 0.1049 | 0.0336 |
| 0.1777 | 50.0 | 2900 | 0.1888 | 0.1049 | 0.0343 |
| 0.1777 | 51.0 | 2958 | 0.1930 | 0.1026 | 0.0336 |
| 0.1655 | 52.0 | 3016 | 0.1900 | 0.1018 | 0.0333 |
| 0.1655 | 53.0 | 3074 | 0.1950 | 0.1034 | 0.0331 |
| 0.1805 | 54.0 | 3132 | 0.1946 | 0.1045 | 0.0340 |
| 0.1805 | 55.0 | 3190 | 0.1959 | 0.1030 | 0.0337 |
| 0.1829 | 56.0 | 3248 | 0.1933 | 0.0987 | 0.0325 |
| 0.1621 | 57.0 | 3306 | 0.1908 | 0.0976 | 0.0325 |
| 0.1621 | 58.0 | 3364 | 0.1892 | 0.1010 | 0.0331 |
| 0.1702 | 59.0 | 3422 | 0.1907 | 0.0995 | 0.0322 |
| 0.1702 | 60.0 | 3480 | 0.1934 | 0.1003 | 0.0326 |
| 0.1652 | 61.0 | 3538 | 0.1959 | 0.0987 | 0.0328 |
| 0.1652 | 62.0 | 3596 | 0.1961 | 0.0976 | 0.0323 |
| 0.1567 | 63.0 | 3654 | 0.1927 | 0.0991 | 0.0330 |
| 0.1496 | 64.0 | 3712 | 0.1912 | 0.0983 | 0.0327 |
| 0.1496 | 65.0 | 3770 | 0.1963 | 0.1007 | 0.0330 |
| 0.1672 | 66.0 | 3828 | 0.1958 | 0.0999 | 0.0328 |
| 0.1672 | 67.0 | 3886 | 0.1962 | 0.0987 | 0.0328 |
| 0.141 | 68.0 | 3944 | 0.1957 | 0.0964 | 0.0320 |
| 0.144 | 69.0 | 4002 | 0.1942 | 0.0949 | 0.0316 |
| 0.144 | 70.0 | 4060 | 0.1931 | 0.0995 | 0.0331 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
keskin-oguzhan/phi2-squadv2-merged | keskin-oguzhan | 2024-03-10T23:23:48Z | 8 | 1 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"keskin-oguzhan/phi2-squadv2",
"custom_code",
"base_model:keskin-oguzhan/phi2-squadv2",
"base_model:finetune:keskin-oguzhan/phi2-squadv2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T23:16:55Z | ---
tags:
- merge
- mergekit
- lazymergekit
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
base_model:
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
---
# phi2-squadv2-merged
phi2-squadv2-merged is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [keskin-oguzhan/phi2-squadv2](https://huggingface.co/keskin-oguzhan/phi2-squadv2)
* [keskin-oguzhan/phi2-squadv2](https://huggingface.co/keskin-oguzhan/phi2-squadv2)
* [keskin-oguzhan/phi2-squadv2](https://huggingface.co/keskin-oguzhan/phi2-squadv2)
* [keskin-oguzhan/phi2-squadv2](https://huggingface.co/keskin-oguzhan/phi2-squadv2)
* [keskin-oguzhan/phi2-squadv2](https://huggingface.co/keskin-oguzhan/phi2-squadv2)
* [keskin-oguzhan/phi2-squadv2](https://huggingface.co/keskin-oguzhan/phi2-squadv2)
* [keskin-oguzhan/phi2-squadv2](https://huggingface.co/keskin-oguzhan/phi2-squadv2)
## 🧩 Configuration
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 8]
model: keskin-oguzhan/phi2-squadv2
- sources:
- layer_range: [4, 12]
model: keskin-oguzhan/phi2-squadv2
- sources:
- layer_range: [8, 16]
model: keskin-oguzhan/phi2-squadv2
- sources:
- layer_range: [12, 20]
model: keskin-oguzhan/phi2-squadv2
- sources:
- layer_range: [16, 24]
model: keskin-oguzhan/phi2-squadv2
- sources:
- layer_range: [20, 28]
model: keskin-oguzhan/phi2-squadv2
- sources:
- layer_range: [24, 32]
model: keskin-oguzhan/phi2-squadv2
``` |
SjardiWillems/distilbert-base-uncased-finetuned-sentiment | SjardiWillems | 2024-03-10T23:11:09Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T19:05:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2806
- Accuracy: 0.8807
- F1: 0.8807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4144 | 1.0 | 109 | 0.2891 | 0.875 | 0.8749 |
| 0.2441 | 2.0 | 218 | 0.2806 | 0.8807 | 0.8807 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
hamzasidat/BertIronyResults3 | hamzasidat | 2024-03-10T23:09:55Z | 179 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T23:09:12Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BertIronyResults3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertIronyResults3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5868
- Accuracy: 0.6932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 179 | 0.5868 | 0.6932 |
| No log | 2.0 | 358 | 0.6104 | 0.6869 |
| 0.4907 | 3.0 | 537 | 0.6448 | 0.7026 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
hamzasidat/Hamzas_assignment1_Albert2 | hamzasidat | 2024-03-10T23:05:49Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T23:05:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hamzasidat/AlbertResults2 | hamzasidat | 2024-03-10T23:05:46Z | 177 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T23:05:41Z | ---
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: AlbertResults2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.931
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AlbertResults2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1619
- Accuracy: 0.931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3348 | 1.0 | 1000 | 0.2663 | 0.9075 |
| 0.1566 | 2.0 | 2000 | 0.1619 | 0.931 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
saintazunya/outputs-dreambooth-sdxl-kanade | saintazunya | 2024-03-10T23:03:06Z | 2 | 2 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-10T22:17:56Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of skskanadetachibana figure
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - saintazunya/outputs-dreambooth-sdxl-kanade
<Gallery />
## Model description
These are saintazunya/outputs-dreambooth-sdxl-kanade LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of skskanadetachibana figure` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/saintazunya/outputs-dreambooth-sdxl-kanade/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
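Until the authors fill in the snippet above, here is a minimal sketch using 🧨 diffusers; the inference settings (steps, dtype, device) are assumptions rather than values from the training run.
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline
# The fp16-fix VAE matches the one listed above as used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("saintazunya/outputs-dreambooth-sdxl-kanade")
image = pipe("a photo of skskanadetachibana figure", num_inference_steps=30).images[0]
image.save("kanade.png")
```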
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
hamzasidat/Hamzas_assignment1_Bert2 | hamzasidat | 2024-03-10T23:02:02Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T23:02:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hamzasidat/BertResults2 | hamzasidat | 2024-03-10T23:02:00Z | 177 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T23:01:39Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: BertResults2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertResults2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1487
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2236 | 1.0 | 1000 | 0.1929 | 0.924 |
| 0.1179 | 2.0 | 2000 | 0.1487 | 0.94 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
arsruts/distilbert-base-uncased-finetuned-cola | arsruts | 2024-03-10T22:54:08Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-08T13:37:09Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8855
- Matthews Correlation: 0.5339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5155 | 1.0 | 535 | 0.4625 | 0.4354 |
| 0.3412 | 2.0 | 1070 | 0.4636 | 0.5212 |
| 0.2297 | 3.0 | 1605 | 0.6616 | 0.5111 |
| 0.1737 | 4.0 | 2140 | 0.8490 | 0.5265 |
| 0.1228 | 5.0 | 2675 | 0.8855 | 0.5339 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
thomasolav/distilbert-base-uncased-finetuned-sst2 | thomasolav | 2024-03-10T22:53:34Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T22:20:03Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2908
- Accuracy: 0.9060
## Model description
More information needed
## Intended uses & limitations
More information needed
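The authors have not documented intended uses yet; as a minimal illustration, the model can be loaded with the `text-classification` pipeline (the example sentence is arbitrary, and the label names depend on how the classification head was configured):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="thomasolav/distilbert-base-uncased-finetuned-sst2")
print(classifier("A touching and beautifully shot film."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- label ids typically map to negative/positive for SST-2-style heads
```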
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1893 | 1.0 | 4210 | 0.2908 | 0.9060 |
| 0.1403 | 2.0 | 8420 | 0.4215 | 0.8899 |
| 0.0891 | 3.0 | 12630 | 0.4039 | 0.9025 |
| 0.0667 | 4.0 | 16840 | 0.4441 | 0.9014 |
| 0.0378 | 5.0 | 21050 | 0.5482 | 0.9002 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
SjardiWillems/distilbert-base-uncased-finetuned-stsb | SjardiWillems | 2024-03-10T22:47:48Z | 23 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:SjardiWillems/distilbert-base-uncased-finetuned-stsb",
"base_model:finetune:SjardiWillems/distilbert-base-uncased-finetuned-stsb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-05T21:52:02Z | ---
license: apache-2.0
base_model: SjardiWillems/distilbert-base-uncased-finetuned-stsb
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: distilbert-base-uncased-finetuned-stsb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-stsb
This model is a fine-tuned version of [SjardiWillems/distilbert-base-uncased-finetuned-stsb](https://huggingface.co/SjardiWillems/distilbert-base-uncased-finetuned-stsb) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5419
- Pearson: 0.8736
- Spearmanr: 0.8702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.1992432473500055e-06
- train_batch_size: 64
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 90 | 0.5404 | 0.8727 | 0.8690 |
| No log | 2.0 | 180 | 0.5394 | 0.8736 | 0.8701 |
| No log | 3.0 | 270 | 0.5394 | 0.8738 | 0.8703 |
| No log | 4.0 | 360 | 0.5419 | 0.8736 | 0.8702 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Litzy619/V0309P6 | Litzy619 | 2024-03-10T22:45:47Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T07:39:51Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309P6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309P6
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
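Outside the 🤗 Trainer, the `cosine_with_restarts` schedule with 20 warmup steps corresponds roughly to the sketch below; the `model` object and the ~350-step horizon (visible in the results table that follows) are placeholders inferred from this card, not taken from the original training script.
```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup
# `model` is a placeholder for the phi-2 model (plus any adapters) being fine-tuned.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
# 4 (per-device batch) x 32 (gradient accumulation) = 128 effective batch size;
# roughly 350 optimizer steps over 3 epochs, matching the results table below.
lr_scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer, num_warmup_steps=20, num_training_steps=350
)
```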
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.969 | 0.09 | 10 | 0.5527 |
| 0.2118 | 0.17 | 20 | 0.0895 |
| 0.1076 | 0.26 | 30 | 0.0750 |
| 0.0998 | 0.34 | 40 | 0.0690 |
| 0.0936 | 0.43 | 50 | 0.0643 |
| 0.0846 | 0.51 | 60 | 0.0642 |
| 0.0784 | 0.6 | 70 | 0.0639 |
| 0.0857 | 0.68 | 80 | 0.0668 |
| 0.0748 | 0.77 | 90 | 0.0641 |
| 0.111 | 0.85 | 100 | 0.0680 |
| 0.0874 | 0.94 | 110 | 0.0704 |
| 0.0842 | 1.02 | 120 | 0.0675 |
| 0.0797 | 1.11 | 130 | 0.0678 |
| 0.0731 | 1.19 | 140 | 0.0642 |
| 0.0714 | 1.28 | 150 | 0.0584 |
| 0.0709 | 1.37 | 160 | 0.0621 |
| 0.0703 | 1.45 | 170 | 0.0587 |
| 0.0638 | 1.54 | 180 | 0.0595 |
| 0.0678 | 1.62 | 190 | 0.0580 |
| 0.067 | 1.71 | 200 | 0.0600 |
| 0.0672 | 1.79 | 210 | 0.0604 |
| 0.0627 | 1.88 | 220 | 0.0640 |
| 0.0587 | 1.96 | 230 | 0.0592 |
| 0.057 | 2.05 | 240 | 0.0622 |
| 0.0486 | 2.13 | 250 | 0.0663 |
| 0.0484 | 2.22 | 260 | 0.0690 |
| 0.0457 | 2.3 | 270 | 0.0677 |
| 0.0529 | 2.39 | 280 | 0.0636 |
| 0.0533 | 2.47 | 290 | 0.0622 |
| 0.0523 | 2.56 | 300 | 0.0627 |
| 0.0523 | 2.65 | 310 | 0.0638 |
| 0.0456 | 2.73 | 320 | 0.0642 |
| 0.048 | 2.82 | 330 | 0.0648 |
| 0.0454 | 2.9 | 340 | 0.0642 |
| 0.0491 | 2.99 | 350 | 0.0648 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Litzy619/V0309P4 | Litzy619 | 2024-03-10T22:45:17Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T07:37:44Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309P4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309P4
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1886 | 0.09 | 10 | 0.9747 |
| 0.3651 | 0.17 | 20 | 0.0977 |
| 0.1129 | 0.26 | 30 | 0.0765 |
| 0.0955 | 0.34 | 40 | 0.0707 |
| 0.0894 | 0.43 | 50 | 0.0684 |
| 0.083 | 0.51 | 60 | 0.0679 |
| 0.0762 | 0.6 | 70 | 0.0688 |
| 0.0807 | 0.68 | 80 | 0.0672 |
| 0.0699 | 0.77 | 90 | 0.0735 |
| 0.0699 | 0.85 | 100 | 0.0735 |
| 0.0757 | 0.94 | 110 | 0.0663 |
| 0.0726 | 1.02 | 120 | 0.0632 |
| 0.0641 | 1.11 | 130 | 0.0692 |
| 0.0627 | 1.19 | 140 | 0.0625 |
| 0.0579 | 1.28 | 150 | 0.0625 |
| 0.0579 | 1.37 | 160 | 0.0682 |
| 0.0564 | 1.45 | 170 | 0.0642 |
| 0.0544 | 1.54 | 180 | 0.0651 |
| 0.0565 | 1.62 | 190 | 0.0623 |
| 0.057 | 1.71 | 200 | 0.0605 |
| 0.0589 | 1.79 | 210 | 0.0602 |
| 0.0538 | 1.88 | 220 | 0.0659 |
| 0.0528 | 1.96 | 230 | 0.0623 |
| 0.0482 | 2.05 | 240 | 0.0640 |
| 0.0396 | 2.13 | 250 | 0.0693 |
| 0.0398 | 2.22 | 260 | 0.0753 |
| 0.0372 | 2.3 | 270 | 0.0771 |
| 0.0463 | 2.39 | 280 | 0.0707 |
| 0.0447 | 2.47 | 290 | 0.0676 |
| 0.0429 | 2.56 | 300 | 0.0672 |
| 0.0454 | 2.65 | 310 | 0.0670 |
| 0.0377 | 2.73 | 320 | 0.0678 |
| 0.0387 | 2.82 | 330 | 0.0690 |
| 0.0394 | 2.9 | 340 | 0.0690 |
| 0.0414 | 2.99 | 350 | 0.0689 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
NorGLM/NorLlama-3B-Instruction-peft | NorGLM | 2024-03-10T22:42:28Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T22:40:42Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorLlama-3B-Instruction-peft is trained on top of the [NorLlama-3B](https://huggingface.co/NorGLM/NorLlama-3B) model on the [NO-Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca) dataset.
Prompt format:
```
{instruction} {input} : {output}
```
Inference prompt:
```
{instruction} {input} :
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
source_model_id = "NorGLM/NorLlama-3B"
peft_model_id = "NorGLM/NorLlama-3B-Instruction-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the last 20% of the NO-Alpaca dataset:
```python
# Continues from the "Run the Model" snippet above: `model` and `tokenizer` are already loaded.
import json
import torch
from datasets import load_dataset
from transformers import set_seed
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
def merge_columns(example):
if str(example["input"]) == "":
example["text"] = str(example["instruction"]) + " : "
else:
example["text"] = str(example["instruction"]) + " " + str(example["input"]) + " : "
return example
def generate_text(text, max_length=200, do_sample=True, top_p = 0.92, top_k=0):
set_seed(42)
model_inputs = tokenizer(text, return_tensors='pt').to(torch_device)
output = model.generate(**model_inputs, max_new_tokens = max_length, no_repeat_ngram_size=2, pad_token_id=tokenizer.eos_token_id)
return tokenizer.decode(output[0], skip_special_tokens=True)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NbAiLab/norwegian-alpaca", split='train[-20%:]')
print("--MAKING PREDICTIONS---")
model.eval()
output_file = <output file name>
with open(output_file, 'w', encoding='utf-8-sig') as file:
generated_text = []
for question in eval_data['text']:
generated_text.append({"generated_text": generate_text(question)})
print({"text_generated": len(generated_text)})
json_lines = [json.dumps(data) for data in generated_text]
json_data = "\n".join(json_lines)
file.write(json_data)
```
## Note
More training details will be released soon! |
ThuyNT03/CS505_MvPCOQE_viT5_Prompting5_top1 | ThuyNT03 | 2024-03-10T22:41:01Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T16:30:59Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_MvPCOQE_viT5_Prompting5_top1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_MvPCOQE_viT5_Prompting5_top1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
NorGLM/NorGPT-3B-continue-Instruction-peft | NorGLM | 2024-03-10T22:38:50Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T22:34:53Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-3B-continue-Instruction-peft is trained on top of the [NorGPT-3B-continue](https://huggingface.co/NorGLM/NorGPT-3B-continue) model on the [NO-Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca) dataset.
Prompt format:
```
{instruction} {input} : {output}
```
Inference prompt:
```
{instruction} {input} :
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
source_model_id = "NorGLM/NorGPT-3B-continue"
peft_model_id = "NorGLM/NorGPT-3B-continue-Instruction-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the last 20% of the NO-Alpaca dataset:
```python
# Continues from the "Run the Model" snippet above: `model` and `tokenizer` are already loaded.
import json
import torch
from datasets import load_dataset
from transformers import set_seed
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
def merge_columns(example):
if str(example["input"]) == "":
example["text"] = str(example["instruction"]) + " : "
else:
example["text"] = str(example["instruction"]) + " " + str(example["input"]) + " : "
return example
def generate_text(text, max_length=200, do_sample=True, top_p = 0.92, top_k=0):
set_seed(42)
model_inputs = tokenizer(text, return_tensors='pt').to(torch_device)
output = model.generate(**model_inputs, max_new_tokens = max_length, no_repeat_ngram_size=2, pad_token_id=tokenizer.eos_token_id)
return tokenizer.decode(output[0], skip_special_tokens=True)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NbAiLab/norwegian-alpaca", split='train[-20%:]')
print("--MAKING PREDICTIONS---")
model.eval()
output_file = <output file name>
with open(output_file, 'w', encoding='utf-8-sig') as file:
generated_text = []
for question in eval_data['text']:
generated_text.append({"generated_text": generate_text(question)})
print({"text_generated": len(generated_text)})
json_lines = [json.dumps(data) for data in generated_text]
json_data = "\n".join(json_lines)
file.write(json_data)
```
## Note
More training details will be released soon! |
Jackline/Blip2-HateSpeech-PEFT-LLM-2.7b | Jackline | 2024-03-10T22:37:03Z | 3 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Salesforce/blip2-opt-2.7b",
"base_model:adapter:Salesforce/blip2-opt-2.7b",
"region:us"
] | null | 2024-03-10T20:32:22Z | ---
library_name: peft
base_model: Salesforce/blip2-opt-2.7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
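In code, that configuration corresponds roughly to the following sketch for loading the quantized base model and attaching this adapter; the `device_map` choice is an assumption, not something recorded in the card.
```python
from peft import PeftModel
from transformers import BitsAndBytesConfig, Blip2ForConditionalGeneration
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
base = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    quantization_config=bnb_config,
    device_map="auto",  # assumption
)
model = PeftModel.from_pretrained(base, "Jackline/Blip2-HateSpeech-PEFT-LLM-2.7b")
```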
### Framework versions
- PEFT 0.6.1
|
hamzasidat/Hamzas_Distilbert_Irony3 | hamzasidat | 2024-03-10T22:33:11Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T22:33:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hamzasidat/DistilbertIronyResults3 | hamzasidat | 2024-03-10T22:33:08Z | 176 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T22:32:41Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DistilbertIronyResults3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilbertIronyResults3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6026
- Accuracy: 0.6806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 179 | 0.6294 | 0.6147 |
| No log | 2.0 | 358 | 0.6026 | 0.6806 |
| 0.5319 | 3.0 | 537 | 0.6334 | 0.6817 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
NorGLM/NorGPT-369M-Instruction-peft | NorGLM | 2024-03-10T22:32:31Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T19:59:31Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-369M-Instruction-peft is trained on top of the [NorGPT-369M](https://huggingface.co/NorGLM/NorGPT-369M) model on the [NO-Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca) dataset.
Prompt format:
```
{instruction} {input} : {output}
```
Inference prompt:
```
{instruction} {input} :
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
source_model_id = "NorGLM/NorGPT-369M"
peft_model_id = "NorGLM/NorGPT-369M-Instruction-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the last 20% of the NO-Alpaca dataset:
```python
# Continues from the "Run the Model" snippet above: `model` and `tokenizer` are already loaded.
import json
import torch
from datasets import load_dataset
from transformers import set_seed
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
def merge_columns(example):
if str(example["input"]) == "":
example["text"] = str(example["instruction"]) + " : "
else:
example["text"] = str(example["instruction"]) + " " + str(example["input"]) + " : "
return example
def generate_text(text, max_length=200, do_sample=True, top_p = 0.92, top_k=0):
set_seed(42)
model_inputs = tokenizer(text, return_tensors='pt').to(torch_device)
output = model.generate(**model_inputs, max_new_tokens = max_length, no_repeat_ngram_size=2, pad_token_id=tokenizer.eos_token_id)
return tokenizer.decode(output[0], skip_special_tokens=True)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NbAiLab/norwegian-alpaca", split='train[-20%:]')
print("--MAKING PREDICTIONS---")
model.eval()
output_file = <output file name>
with open(output_file, 'w', encoding='utf-8-sig') as file:
generated_text = []
for question in eval_data['text']:
generated_text.append({"generated_text": generate_text(question)})
print({"text_generated": len(generated_text)})
json_lines = [json.dumps(data) for data in generated_text]
json_data = "\n".join(json_lines)
file.write(json_data)
```
## Note
More training details will be released soon!
|
EarthnDusk/Lora_Extractions | EarthnDusk | 2024-03-10T22:31:59Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-06T04:21:26Z | ---
license: creativeml-openrail-m
---
LoRA extractions via Bmaltais/Kohya SS
---
These are LoRA extractions of our existing models. Feel free to mooch; there should be no activation tag.
They are 128/128 dim/alpha for SD 1.5, but SplatterpunkAlpha is SDXL and is 32/16.
Feel free, with credit if possible, to merge them back into your own content.
The SD 1.5 versions didn't turn out well, unless we tested them wrong.
Splatterpunk is the SDXL one. |
numen-tech/TinyLlama-1.1B-Chat-v1.0-w4a16g128asym | numen-tech | 2024-03-10T22:21:47Z | 0 | 0 | null | [
"arxiv:2308.13137",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T22:17:19Z | ---
license: apache-2.0
---
4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
|
thomasolav/distilbert-base-uncased-finetuned-cola | thomasolav | 2024-03-10T22:11:32Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T21:56:27Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8412
- Matthews Correlation: 0.5340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5208 | 1.0 | 535 | 0.4576 | 0.4452 |
| 0.3435 | 2.0 | 1070 | 0.4613 | 0.5168 |
| 0.2338 | 3.0 | 1605 | 0.6399 | 0.5195 |
| 0.1753 | 4.0 | 2140 | 0.8412 | 0.5340 |
| 0.1295 | 5.0 | 2675 | 0.8539 | 0.5305 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
grace-pro/one_half_data_high_rank_v2 | grace-pro | 2024-03-10T21:55:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T21:53:43Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: one_half_data_high_rank_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# one_half_data_high_rank_v2
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6564
- Precision: 0.8403
- Recall: 0.9383
- F1-score: 0.8866
- Accuracy: 0.8450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 0.535 | 1.0 | 24544 | 0.6564 | 0.8403 | 0.9383 | 0.8866 | 0.8450 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
AwAppp/benchmarks_4bit_batch_size45 | AwAppp | 2024-03-10T21:49:33Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:49:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
m7n/dierenleven-sdxl-lora-001 | m7n | 2024-03-10T21:48:51Z | 3 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"diffusers-training",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-10T17:23:19Z | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers-training
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'litograph in the style of <s0><s1>, showing a beautiful bird of paradise'
output:
url:
"image_0.png"
- text: 'litograph in the style of <s0><s1>, showing a beautiful bird of paradise'
output:
url:
"image_1.png"
- text: 'litograph in the style of <s0><s1>, showing a beautiful bird of paradise'
output:
url:
"image_2.png"
- text: 'litograph in the style of <s0><s1>, showing a beautiful bird of paradise'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: litograph in the style of <s0><s1>, showing a beautiful bird of paradise
license: openrail++
---
# SDXL LoRA DreamBooth - m7n/dierenleven-sdxl-lora-001
<Gallery />
## Model description
### These are m7n/dierenleven-sdxl-lora-001 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`dierenleven-sdxl-lora-001.safetensors` here 💾](/m7n/dierenleven-sdxl-lora-001/blob/main/dierenleven-sdxl-lora-001.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:dierenleven-sdxl-lora-001:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`dierenleven-sdxl-lora-001_emb.safetensors` here 💾](/m7n/dierenleven-sdxl-lora-001/blob/main/dierenleven-sdxl-lora-001_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `dierenleven-sdxl-lora-001_emb` to your prompt. For example, `litograph in the style of dierenleven-sdxl-lora-001_emb, showing a beautiful bird of paradise`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('m7n/dierenleven-sdxl-lora-001', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='m7n/dierenleven-sdxl-lora-001', filename='dierenleven-sdxl-lora-001_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('litograph in the style of <s0><s1>, showing a beautiful bird of paradise').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/m7n/dierenleven-sdxl-lora-001/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
AwAppp/benchmarks_4bit_batch_size40 | AwAppp | 2024-03-10T21:48:00Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:48:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
migtissera/Tess-72B-v1.5b | migtissera | 2024-03-10T21:46:57Z | 47 | 15 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-08T17:34:38Z | ---
license: other
license_name: qwen-72b-licence
license_link: https://huggingface.co/Qwen/Qwen-72B/blob/main/LICENSE
model-index:
- name: Tess-72B-v1.5b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.99
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
---
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-72B-v1.5b was trained on the Qwen-72B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
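A hedged generation sketch that follows this prompt format with 🤗 Transformers (the model id comes from this card; the system message, sampling settings, and hardware assumptions are illustrative, and the 72B weights require multiple large GPUs or offloading):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "migtissera/Tess-72B-v1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a prompt in the SYSTEM/USER/ASSISTANT format shown above.
prompt = "SYSTEM: You are Tess, a helpful assistant.\nUSER: What is the capital of Italy?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```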
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Tess-72B-v1.5b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |77.30|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |85.53|
|MMLU (5-Shot) |76.63|
|TruthfulQA (0-shot) |71.99|
|Winogrande (5-shot) |81.45|
|GSM8k (5-shot) |76.95|
|
AwAppp/benchmarks_4bit_batch_size25 | AwAppp | 2024-03-10T21:43:17Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:43:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwAppp/benchmarks_4bit_batch_size20 | AwAppp | 2024-03-10T21:41:42Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:41:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
myownip/axolotl-openllama-1k-qlora-v02 | myownip | 2024-03-10T21:40:07Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-10T21:40:02Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openlm-research/open_llama_3b_v2
model-index:
- name: qlora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: openlm-research/open_llama_3b_v2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
datasets:
- path: mhenrichsen/alpaca_2k_test
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
adapter: qlora
lora_model_dir:
sequence_len: 1024
sample_packing: true
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
output_dir: ./qlora-out
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
gptq_groupsize:
gptq_model_v1:
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# qlora-out
This model is a fine-tuned version of [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1118
## Model description
More information needed
## Intended uses & limitations
More information needed
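A hedged sketch of loading the resulting QLoRA adapter for inference (the repo id comes from this card; it assumes the adapter was pushed in the standard PEFT layout on top of open_llama_3b_v2):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumption: this repo contains the qlora-out adapter targeting openlm-research/open_llama_3b_v2.
model = AutoPeftModelForCausalLM.from_pretrained(
    "myownip/axolotl-openllama-1k-qlora-v02", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")
```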
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2567 | 0.0 | 1 | 1.3470 |
| 1.1738 | 0.25 | 108 | 1.1365 |
| 1.113 | 0.5 | 216 | 1.1231 |
| 1.413 | 0.75 | 324 | 1.1118 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0 |
Owhslp/nous_researcher_tuning_2_17 | Owhslp | 2024-03-10T21:39:40Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T20:45:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AwAppp/benchmarks_4bit_batch_size10 | AwAppp | 2024-03-10T21:39:36Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:39:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwAppp/benchmarks_4bit_batch_size5 | AwAppp | 2024-03-10T21:38:29Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:38:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SDFASDGA/llm | SDFASDGA | 2024-03-10T21:37:07Z | 10 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2023-11-11T11:30:08Z | Models for llm.f90 - LLMs in Fortran
See the Files tab, https://github.com/rbitr/llm.f90, and https://github.com/rbitr/ferrite for more detail.
|
automerger/ShadowCalme-7B | automerger | 2024-03-10T21:33:15Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:CorticalStack/shadow-clown-7B-dare",
"base_model:merge:CorticalStack/shadow-clown-7B-dare",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.1.1",
"base_model:merge:MaziyarPanahi/Calme-7B-Instruct-v0.1.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T21:32:28Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- CorticalStack/shadow-clown-7B-dare
- MaziyarPanahi/Calme-7B-Instruct-v0.1.1
---
# ShadowCalme-7B
ShadowCalme-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [CorticalStack/shadow-clown-7B-dare](https://huggingface.co/CorticalStack/shadow-clown-7B-dare)
* [MaziyarPanahi/Calme-7B-Instruct-v0.1.1](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.1.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: CorticalStack/shadow-clown-7B-dare
layer_range: [0, 32]
- model: MaziyarPanahi/Calme-7B-Instruct-v0.1.1
layer_range: [0, 32]
merge_method: slerp
base_model: CorticalStack/shadow-clown-7B-dare
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/ShadowCalme-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
myownip/axolotl-openllama-1k-qlora | myownip | 2024-03-10T21:33:10Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-10T21:33:05Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openlm-research/open_llama_3b_v2
model-index:
- name: qlora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: openlm-research/open_llama_3b_v2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
datasets:
- path: mhenrichsen/alpaca_2k_test
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
adapter: qlora
lora_model_dir:
sequence_len: 1024
sample_packing: true
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
output_dir: ./qlora-out
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
gptq_groupsize:
gptq_model_v1:
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# qlora-out
This model is a fine-tuned version of [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2567 | 0.0 | 1 | 1.3470 |
| 1.1738 | 0.25 | 108 | 1.1365 |
| 1.113 | 0.5 | 216 | 1.1231 |
| 1.413 | 0.75 | 324 | 1.1118 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0 |
ThuyNT03/CS505_MvPCOQE_viT5_Prompting5_top1_v2 | ThuyNT03 | 2024-03-10T21:32:22Z | 94 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T17:59:11Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_MvPCOQE_viT5_Prompting5_top1_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_MvPCOQE_viT5_Prompting5_top1_v2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
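The card does not document usage; a minimal hedged loading sketch follows (the input string is a placeholder, since the prompting format used for this fine-tune is not described here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "ThuyNT03/CS505_MvPCOQE_viT5_Prompting5_top1_v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Placeholder input; replace with text in the prompt format used during training.
inputs = tokenizer("<input text in the training prompt format>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```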
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
sauravjoshi23/mistral-7B-hotpotqa | sauravjoshi23 | 2024-03-10T21:29:17Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T03:39:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
macadeliccc/laser-dolphin-mixtral-4x7b-dpo-AWQ | macadeliccc | 2024-03-10T21:28:48Z | 8 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"base_model:macadeliccc/laser-dolphin-mixtral-4x7b-dpo",
"base_model:quantized:macadeliccc/laser-dolphin-mixtral-4x7b-dpo",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-03-10T20:49:35Z | ---
license: apache-2.0
base_model: macadeliccc/laser-dolphin-mixtral-4x7b-dpo
---
## OpenAI compatible endpoint using VLLM
Runs well on 4090
```
python -m vllm.entrypoints.openai.api_server --model macadeliccc/laser-dolphin-mixtral-4x7b-dpo-AWQ --max-model-len 25000
``` |
GreatGatsby777/ppo-LunarLander-v2 | GreatGatsby777 | 2024-03-10T21:24:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T20:55:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.50 +/- 31.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repo files for the exact checkpoint name
checkpoint = load_from_hub("GreatGatsby777/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tsavage68/mistralit2_1000_STEPS_1e8_rate_0.1_beta_DPO | tsavage68 | 2024-03-10T21:22:33Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T21:18:48Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mistralit2_1000_STEPS_1e8_rate_0.1_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralit2_1000_STEPS_1e8_rate_0.1_beta_DPO
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6920
- Rewards/chosen: -0.0058
- Rewards/rejected: -0.0082
- Rewards/accuracies: 0.5121
- Rewards/margins: 0.0024
- Logps/rejected: -28.6543
- Logps/chosen: -23.4436
- Logits/rejected: -2.8649
- Logits/chosen: -2.8652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for an equivalent TRL setup):
- learning_rate: 1e-08
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
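The training script itself is not published; a hypothetical sketch of an equivalent TRL DPO setup (the inline preference data is for illustration only, not the real dataset) might look like:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Illustrative preference pairs; the dataset actually used for this checkpoint is unknown
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["The capital of France is Paris."],
    "rejected": ["France has no capital."],
})

args = TrainingArguments(
    output_dir="dpo-out",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    learning_rate=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    remove_unused_columns=False,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    beta=0.1,  # matches the 0.1_beta in the model name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```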
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.693 | 0.1 | 50 | 0.6928 | 0.0007 | -0.0000 | 0.4549 | 0.0007 | -28.5728 | -23.3792 | -2.8652 | -2.8654 |
| 0.693 | 0.2 | 100 | 0.6920 | 0.0012 | -0.0011 | 0.4945 | 0.0023 | -28.5838 | -23.3741 | -2.8653 | -2.8655 |
| 0.693 | 0.29 | 150 | 0.6923 | -0.0015 | -0.0033 | 0.4989 | 0.0018 | -28.6052 | -23.4006 | -2.8651 | -2.8653 |
| 0.694 | 0.39 | 200 | 0.6923 | -0.0020 | -0.0037 | 0.4813 | 0.0017 | -28.6093 | -23.4058 | -2.8651 | -2.8653 |
| 0.6916 | 0.49 | 250 | 0.6922 | -0.0026 | -0.0046 | 0.4879 | 0.0021 | -28.6189 | -23.4118 | -2.8651 | -2.8654 |
| 0.6927 | 0.59 | 300 | 0.6920 | -0.0039 | -0.0063 | 0.5011 | 0.0023 | -28.6350 | -23.4253 | -2.8650 | -2.8653 |
| 0.6941 | 0.68 | 350 | 0.6927 | -0.0048 | -0.0058 | 0.4659 | 0.0010 | -28.6304 | -23.4334 | -2.8650 | -2.8652 |
| 0.6924 | 0.78 | 400 | 0.6922 | -0.0049 | -0.0068 | 0.4989 | 0.0019 | -28.6399 | -23.4345 | -2.8650 | -2.8653 |
| 0.6919 | 0.88 | 450 | 0.6918 | -0.0056 | -0.0084 | 0.4857 | 0.0028 | -28.6562 | -23.4418 | -2.8650 | -2.8653 |
| 0.6913 | 0.98 | 500 | 0.6913 | -0.0047 | -0.0085 | 0.5077 | 0.0038 | -28.6577 | -23.4328 | -2.8649 | -2.8652 |
| 0.6914 | 1.07 | 550 | 0.6915 | -0.0034 | -0.0067 | 0.5143 | 0.0033 | -28.6398 | -23.4200 | -2.8650 | -2.8653 |
| 0.6939 | 1.17 | 600 | 0.6922 | -0.0069 | -0.0089 | 0.5033 | 0.0020 | -28.6613 | -23.4550 | -2.8650 | -2.8652 |
| 0.6917 | 1.27 | 650 | 0.6920 | -0.0056 | -0.0081 | 0.5231 | 0.0025 | -28.6535 | -23.4422 | -2.8650 | -2.8653 |
| 0.6919 | 1.37 | 700 | 0.6921 | -0.0052 | -0.0074 | 0.5055 | 0.0021 | -28.6463 | -23.4383 | -2.8650 | -2.8653 |
| 0.6929 | 1.46 | 750 | 0.6915 | -0.0044 | -0.0078 | 0.5363 | 0.0034 | -28.6506 | -23.4298 | -2.8650 | -2.8653 |
| 0.6919 | 1.56 | 800 | 0.6922 | -0.0063 | -0.0083 | 0.5209 | 0.0020 | -28.6553 | -23.4489 | -2.8649 | -2.8652 |
| 0.6925 | 1.66 | 850 | 0.6921 | -0.0058 | -0.0080 | 0.5121 | 0.0022 | -28.6528 | -23.4438 | -2.8649 | -2.8652 |
| 0.6925 | 1.76 | 900 | 0.6920 | -0.0058 | -0.0082 | 0.5121 | 0.0024 | -28.6543 | -23.4436 | -2.8649 | -2.8652 |
| 0.6939 | 1.86 | 950 | 0.6920 | -0.0058 | -0.0082 | 0.5121 | 0.0024 | -28.6543 | -23.4436 | -2.8649 | -2.8652 |
| 0.6924 | 1.95 | 1000 | 0.6920 | -0.0058 | -0.0082 | 0.5121 | 0.0024 | -28.6543 | -23.4436 | -2.8649 | -2.8652 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AmineSaidi-ISTIC/phi-2-finetuned-knowledgator-events_classification | AmineSaidi-ISTIC | 2024-03-10T21:21:55Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-06T13:56:06Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-finetuned-knowledgator-events_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-knowledgator-events_classification
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
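Usage is not documented; a minimal sketch for loading the LoRA adapter on top of the base model (assuming the adapter weights in this repo are complete) is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", device_map="auto")
model = PeftModel.from_pretrained(base, "AmineSaidi-ISTIC/phi-2-finetuned-knowledgator-events_classification")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
```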
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2 |
KapilPathak/gemma_summary_7b | KapilPathak | 2024-03-10T21:17:13Z | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"region:us"
] | null | 2024-03-10T03:41:18Z | ---
library_name: peft
base_model: google/gemma-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
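In the absence of official usage code, a hypothetical loading sketch for this PEFT adapter (access to the gated `google/gemma-7b` base model is required) is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the gated base model, then attach this repo's adapter
base = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "KapilPathak/gemma_summary_7b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
```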
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
togu6669/ql-Taxi-v3 | togu6669 | 2024-03-10T21:15:25Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T21:15:21Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: ql-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym
# `load_from_hub` is the helper defined in the Deep RL Course Q-Learning notebook (it downloads and unpickles the saved Q-table)
model = load_from_hub(repo_id="togu6669/ql-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
togu6669/q-FrozenLake-v1-4x4-noSlippery | togu6669 | 2024-03-10T21:09:59Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T21:09:55Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym
# `load_from_hub` is the helper defined in the Deep RL Course Q-Learning notebook (it downloads and unpickles the saved Q-table)
model = load_from_hub(repo_id="togu6669/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bartowski/speechless-starcoder2-7b-exl2 | bartowski | 2024-03-10T20:55:17Z | 0 | 1 | transformers | [
"transformers",
"code",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:TokenBender/python_eval_instruct_51k",
"dataset:codefuse-ai/Evol-instruction-66k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T20:41:05Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- teknium/OpenHermes-2.5
- TokenBender/python_eval_instruct_51k
- codefuse-ai/Evol-instruction-66k
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.0
verified: false
quantized_by: bartowski
---
## Exllama v2 Quantizations of speechless-starcoder2-7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization.
## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/uukuguy/speechless-starcoder2-7b
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/8_0">8.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/4_25">4.25 bits per weight</a>
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/3_5">3.5 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `speechless-starcoder2-7b-exl2`:
```shell
mkdir speechless-starcoder2-7b-exl2
huggingface-cli download bartowski/speechless-starcoder2-7b-exl2 --local-dir speechless-starcoder2-7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir speechless-starcoder2-7b-exl2-6_5
huggingface-cli download bartowski/speechless-starcoder2-7b-exl2 --revision 6_5 --local-dir speechless-starcoder2-7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir speechless-starcoder2-7b-exl2-6.5
huggingface-cli download bartowski/speechless-starcoder2-7b-exl2 --revision 6_5 --local-dir speechless-starcoder2-7b-exl2-6.5 --local-dir-use-symlinks False
``` |
bartowski/dolphincoder-starcoder2-7b-exl2 | bartowski | 2024-03-10T20:43:19Z | 2 | 2 | null | [
"text-generation",
"en",
"dataset:cognitivecomputations/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:m-a-p/Code-Feedback",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:microsoft/orca-math-word-problems-200k",
"license:bigcode-openrail-m",
"region:us"
] | text-generation | 2024-03-10T16:05:14Z | ---
datasets:
- cognitivecomputations/dolphin
- jondurbin/airoboros-2.2.1
- cognitivecomputations/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
- microsoft/orca-math-word-problems-200k
language:
- en
license: bigcode-openrail-m
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of dolphincoder-starcoder2-7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization.
## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-7b
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.2 GB | 10.2 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/6_5) | 6.5 | 8.0 | 7.1 GB | 7.9 GB | 8.9 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/5_0) | 5.0 | 6.0 | 5.8 GB | 6.6 GB | 7.6 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/4_25) | 4.25 | 6.0 | 5.1 GB | 5.9 GB | 6.9 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/3_5) | 3.5 | 6.0 | 4.5 GB | 5.3 GB | 6.3 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `dolphincoder-starcoder2-7b-exl2`:
```shell
mkdir dolphincoder-starcoder2-7b-exl2
huggingface-cli download bartowski/dolphincoder-starcoder2-7b-exl2 --local-dir dolphincoder-starcoder2-7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir dolphincoder-starcoder2-7b-exl2-6_5
huggingface-cli download bartowski/dolphincoder-starcoder2-7b-exl2 --revision 6_5 --local-dir dolphincoder-starcoder2-7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir dolphincoder-starcoder2-7b-exl2-6.5
huggingface-cli download bartowski/dolphincoder-starcoder2-7b-exl2 --revision 6_5 --local-dir dolphincoder-starcoder2-7b-exl2-6.5 --local-dir-use-symlinks False
``` |
MaziyarPanahi/Saul-Instruct-v1-GGUF | MaziyarPanahi | 2024-03-10T20:38:52Z | 131 | 6 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"legal",
"conversational",
"en",
"arxiv:2403.03883",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:Equall/Saul-7B-Instruct-v1",
"base_model:quantized:Equall/Saul-7B-Instruct-v1"
] | text-generation | 2024-03-10T20:14:56Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- legal
- conversational
- en
- arxiv:2403.03883
- license:mit
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Saul-Instruct-v1-GGUF
base_model: Equall/Saul-Instruct-v1
inference: false
model_creator: Equall
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Saul-Instruct-v1-GGUF](https://huggingface.co/MaziyarPanahi/Saul-Instruct-v1-GGUF)
- Model creator: [Equall](https://huggingface.co/Equall)
- Original model: [Equall/Saul-Instruct-v1](https://huggingface.co/Equall/Saul-Instruct-v1)
## Description
[MaziyarPanahi/Saul-Instruct-v1-GGUF](https://huggingface.co/MaziyarPanahi/Saul-Instruct-v1-GGUF) contains GGUF format model files for [Equall/Saul-Instruct-v1](https://huggingface.co/Equall/Saul-Instruct-v1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Saul-Instruct-v1-GGUF](https://huggingface.co/MaziyarPanahi/Saul-Instruct-v1-GGUF) and below it, a specific filename to download, such as: Saul-Instruct-v1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Saul-Instruct-v1-GGUF Saul-Instruct-v1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/Saul-Instruct-v1-GGUF](https://huggingface.co/MaziyarPanahi/Saul-Instruct-v1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Saul-Instruct-v1-GGUF Saul-Instruct-v1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Saul-Instruct-v1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Saul-Instruct-v1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Saul-Instruct-v1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
sweetfelinity/Reinforce-CartPole-v1 | sweetfelinity | 2024-03-10T20:33:22Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T20:33:13Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fbellame/confoo-train-llama-style-1-1 | fbellame | 2024-03-10T20:33:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T19:06:14Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.36.1
```
Also make sure you are providing your huggingface token to the pipeline if the model is hosted in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="fbellame/confoo-train-llama-style-1-1",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.0),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
Why is drinking water so healthy?</s>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"fbellame/confoo-train-llama-style-1-1",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"fbellame/confoo-train-llama-style-1-1",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.0),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "fbellame/confoo-train-llama-style-1-1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?</s>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.0),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```.
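A minimal sketch of both options, assuming `bitsandbytes` and `accelerate` are installed:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "fbellame/confoo-train-llama-style-1-1",
    load_in_4bit=True,   # or load_in_8bit=True for 8-bit quantization
    device_map="auto",   # shards the model across all visible GPUs
    trust_remote_code=True,
)
```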
## Model Architecture
```
MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=1024, bias=False)
(v_proj): Linear(in_features=4096, out_features=1024, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLU()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
jairNeto/bert-finetuned-sem_eval-english | jairNeto | 2024-03-10T20:24:14Z | 94 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:Jairnetojp/content-moderation",
"base_model:finetune:Jairnetojp/content-moderation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T20:23:02Z | ---
base_model: Jairnetojp/content-moderation
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-sem_eval-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sem_eval-english
This model is a fine-tuned version of [Jairnetojp/content-moderation](https://huggingface.co/Jairnetojp/content-moderation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2343
- F1: 0.5458
- Roc Auc: 0.7829
- Accuracy: 0.4655
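The F1/ROC AUC/accuracy combination indicates a multi-label setup; a minimal inference sketch (assuming the checkpoint is public on the Hub) is:

```python
from transformers import pipeline

# top_k=None returns a score for every label instead of only the best one
classifier = pipeline("text-classification", model="jairNeto/bert-finetuned-sem_eval-english", top_k=None)
print(classifier("Example text to moderate."))
```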
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 232 | 0.2283 | 0.5361 | 0.7829 | 0.4503 |
| No log | 2.0 | 464 | 0.2343 | 0.5458 | 0.7829 | 0.4655 |
| 0.069 | 3.0 | 696 | 0.2461 | 0.5392 | 0.7832 | 0.4544 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Bienvenu2004/donut-base-pv-aws2 | Bienvenu2004 | 2024-03-10T20:19:19Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-03-10T07:14:47Z | ---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-pv-aws2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-pv-aws2
This model was trained from scratch on the imagefolder dataset.
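Usage is not documented; a hypothetical Donut inference sketch (the checkpoint-specific task prompt token is unknown, so `<s>` is used as a placeholder) is:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Bienvenu2004/donut-base-pv-aws2"
processor = DonutProcessor.from_pretrained(repo)  # assumes the processor was saved with the model
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png")  # placeholder path
task_prompt = "<s>"  # replace with the task token used during fine-tuning
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True))
```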
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Balassar/balassarprofile | Balassar | 2024-03-10T20:18:41Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T20:10:46Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: balassarprofile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# balassarprofile
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8042
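Usage is not documented; a hypothetical loading sketch (the base model is GPTQ-quantized, so `optimum` and `auto-gptq` need to be installed) is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the GPTQ base model, then attach this repo's adapter
base = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", device_map="auto")
model = PeftModel.from_pretrained(base, "Balassar/balassarprofile")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")
```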
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8802 | 1.0 | 1 | 3.6270 |
| 3.861 | 2.0 | 2 | 3.5746 |
| 3.7758 | 3.0 | 3 | 3.4416 |
| 3.5819 | 4.0 | 4 | 3.3048 |
| 3.3879 | 5.0 | 5 | 3.1740 |
| 3.2106 | 6.0 | 6 | 3.0575 |
| 3.0652 | 7.0 | 7 | 2.9588 |
| 2.94 | 8.0 | 8 | 2.8822 |
| 2.8566 | 9.0 | 9 | 2.8301 |
| 2.7926 | 10.0 | 10 | 2.8042 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
xKizzi/taxirepo | xKizzi | 2024-03-10T20:13:31Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T20:13:29Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxirepo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.34 +/- 2.46
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym
# `load_from_hub` is the helper defined in the Deep RL Course Q-Learning notebook (it downloads and unpickles the saved Q-table)
model = load_from_hub(repo_id="xKizzi/taxirepo", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
eduvedras/pix2struct-textcaps-base-desc-vars-final | eduvedras | 2024-03-10T20:05:13Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"pix2struct",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-03-10T19:03:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
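In the absence of official usage code, a hypothetical sketch that assumes the checkpoint keeps the captioning-style interface of its `pix2struct-textcaps-base` parent (the image path is a placeholder) is:

```python
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

repo = "eduvedras/pix2struct-textcaps-base-desc-vars-final"
processor = Pix2StructProcessor.from_pretrained(repo)
model = Pix2StructForConditionalGeneration.from_pretrained(repo)

image = Image.open("chart.png")  # placeholder image
inputs = processor(images=image, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True))
```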
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yvzplay2/PT-deneme1 | yvzplay2 | 2024-03-10T20:02:20Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T19:51:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
model-index:
- name: PT-deneme1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PT-deneme1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the rotten_tomatoes dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
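For illustration only, the hyperparameters above map onto a standard 🤗 `transformers` fine-tuning run roughly as sketched below (the tokenization and column handling are assumptions based on the usual rotten_tomatoes layout, not the authors' actual training script):
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# rotten_tomatoes ships "text" and "label" columns (binary sentiment)
dataset = load_dataset("rotten_tomatoes")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="PT-deneme1",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```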
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 1.9.1+cu111
- Datasets 2.13.2
- Tokenizers 0.13.3
|
MaziyarPanahi/Saul-Base-GGUF | MaziyarPanahi | 2024-03-10T20:02:13Z | 89 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"legal",
"conversational",
"en",
"arxiv:2403.03883",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:Equall/Saul-7B-Base",
"base_model:quantized:Equall/Saul-7B-Base"
] | text-generation | 2024-03-10T19:38:43Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- legal
- conversational
- en
- arxiv:2403.03883
- license:mit
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Saul-Base-GGUF
base_model: Equall/Saul-Base
inference: false
model_creator: Equall
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Saul-Base-GGUF](https://huggingface.co/MaziyarPanahi/Saul-Base-GGUF)
- Model creator: [Equall](https://huggingface.co/Equall)
- Original model: [Equall/Saul-Base](https://huggingface.co/Equall/Saul-Base)
## Description
[MaziyarPanahi/Saul-Base-GGUF](https://huggingface.co/MaziyarPanahi/Saul-Base-GGUF) contains GGUF format model files for [Equall/Saul-Base](https://huggingface.co/Equall/Saul-Base).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
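For intuition, the bits-per-weight figures quoted above can be reproduced with a rough back-of-the-envelope calculation. This is only a sketch: it assumes 256 weights per super-block and 16-bit (fp16) super-block scales, inferred from the descriptions above rather than read from the exact ggml layout.
```python
# Rough bpw accounting: (weight bits + per-block scale/min bits + super-block scale bits) / weights
def bpw(weight_bits, blocks, block_meta_bits, superblock_meta_bits, weights=256):
    total_bits = weights * weight_bits + blocks * block_meta_bits + superblock_meta_bits
    return total_bits / weights

print("Q3_K:", bpw(3, blocks=16, block_meta_bits=6,     superblock_meta_bits=16))  # 3.4375
print("Q4_K:", bpw(4, blocks=8,  block_meta_bits=6 + 6, superblock_meta_bits=32))  # 4.5
print("Q5_K:", bpw(5, blocks=8,  block_meta_bits=6 + 6, superblock_meta_bits=32))  # 5.5
print("Q6_K:", bpw(6, blocks=16, block_meta_bits=8,     superblock_meta_bits=16))  # 6.5625
```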
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Saul-Base-GGUF](https://huggingface.co/MaziyarPanahi/Saul-Base-GGUF) and below it, a specific filename to download, such as: Saul-Base-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Saul-Base-GGUF Saul-Base-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/Saul-Base-GGUF](https://huggingface.co/MaziyarPanahi/Saul-Base-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Saul-Base-GGUF Saul-Base-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Saul-Base-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Saul-Base-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant""",  # Prompt (fill in the placeholders with your system message and user prompt)
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True  # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Saul-Base-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
deepnet/SN6-71G5 | deepnet | 2024-03-10T20:01:53Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T19:58:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
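Until this section is filled in, a minimal, purely illustrative sketch is given below. It assumes a standard causal-LM checkpoint loadable with `AutoModelForCausalLM` (the repository tags list `gemma` and text-generation); verify against the actual files before relying on it.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "deepnet/SN6-71G5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```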
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CurtisJeon/klue-roberta-large-korquad_v1_qa | CurtisJeon | 2024-03-10T20:00:56Z | 93 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"ko",
"dataset:squad_kor_v1",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-19T13:33:47Z | ---
license: mit
datasets:
- squad_kor_v1
language:
- ko
metrics:
- exact_match
- f1
pipeline_tag: question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
- An extractive question answering model for Korean open-domain QA.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** CurtisJeon
- **Model type:** Question Answering
- **Language(s) (NLP):** KR
- **License:** MIT
- **Finetuned from model [optional]:** klue/roberta-large
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
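Until the authors add official usage code, a minimal extractive QA sketch with the 🤗 `pipeline` API could look like this (the question and context below are made-up examples for illustration):
```python
from transformers import pipeline

# Load the fine-tuned Korean extractive QA model from the Hub
qa = pipeline("question-answering", model="CurtisJeon/klue-roberta-large-korquad_v1_qa")

result = qa(
    question="대한민국의 수도는 어디인가?",                      # "What is the capital of South Korea?"
    context="대한민국의 수도는 서울이며, 서울은 가장 큰 도시이다.",  # "The capital of South Korea is Seoul, its largest city."
)
print(result["answer"], result["score"])
```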
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- fine-tuned data: squad_kor_v1
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ignaciosg/blueCarbon | ignaciosg | 2024-03-10T19:59:28Z | 49 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"region:us"
] | text-classification | 2024-02-19T22:45:28Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: physiological metabolisms of seaweeds usually suffered climate changes in
the field. gracilariopsis lemaneiformis and ulva lactuca, collected from nan ao
island, shantou, china, were cultured under ambient and elevated co2 supply ,
with low and high temperatures for weeks, aiming to compare the difference of
the main physiological metabolism between two seaweed species in response to the
elevated co2 and high temperature. at 15 , the ph reduction in the culture medium
caused by elevated co2 was larger in . lemaneiformis than in . lactuca. at 25
, elevated co2 significantly increased photosynthetic rates and maintained constant
respiratory rates in . lemaneiformis. however, for 25 grown . lactuca, the increment
of co2 did not enhance the pn rates but rapidly decreased the rd rates itself.
with the higher rd pg ratios in . lemaneiformis than . lactuca, the warming thereby
promoted more allocation of photosynthetic products to respiratory consumption
in . lemaneiformis. both pg and rd rates exhibited lower temperature acclimation
in two seaweeds. in addition, elevated co2 markedly increased the relative growth
rate and phycobiliprotein contents at 25 , but exhibited no enhancement of chlorophyll
, carotenoids , soluble carbohydrate , and soluble protein contents in . lemaneiformis,
with the reduction of sc when temperature increased only. we suggested that climate
changes were probably more benefit to . lactuca than to . lemaneiformis, inherently
justifying the metabolism during . lemaneiformis maricultivation. 2018, springer
verlag gmbh germany, part of springer nature.
- text: blue carbon is vital aspect of climate change mitigation, which necessitates
the identification of stocks and drivers for implementing mitigation strategies.
however, reclamation may be among the most invasive forms, and the question of
its influence has not been addressed well in blue carbon research. therefore,
the effects of reclamation on carbon stocks and the interaction of crucial drivers
from reclamation time areas were evaluated in the liaohe river delta and compared
with natural reserves . carbon stocks based on invest model were lower than preexisting
conditions . one way analysis of variance showed that average carbon stocks accumulated
55 years after reclamation and reached the lowest value in 85 years. the interaction
analysis of dominant drivers affecting carbon showed the difference between reclaimed
areas and reserves regarding potential effect pathways. in the 1930s and 1960s
reclamation time areas, crop yield and industrial output determined blue carbon
by changing no3 and ap. in the 1990s reclamation time area, population density
played an important role. in defining the impact of vegetation cover on carbon
within the reserves, the distance to the coast and residence were significant
factors. this study demonstrated that coastal
- text: multiple techniques, including thermal infrared aerial remote sensing, geophysical
and geological data, geochemical characterization and radium isotopes, were used
to evaluate the role of groundwater as source of dissolved nutrients, carbon,
and trace gases to the okatee river estuary, south carolina. thermal infrared
aerial remote sensing surveys illustrated the presence of multiple submarine groundwater
discharge sites in okatee headwaters. significant relationships were observed
between groundwater geochemical constituents and ra 226 activity in groundwater
with higher ra 226 activity correlated to higher concentrations of organics, dissolved
inorganic carbon, nutrients, and trace gases to the okatee system. system level
radium mass balance confirmed substantial submarine groundwater discharge contribution
of these constituents to the okatee river. diffusive benthic flux measurements
and potential denitrification rate assays tracked the fate of constituents in
creek bank sediments. diffusive benthic fluxes were substantially lower than calculated
radium based submarine groundwater discharge inputs, showing that advection of
groundwater derived nutrients dominated fluxes in the system. while considerable
potential for denitrification in tidal creek bank sediments was noted, in situ
denitrification rates were nitrate limited, making intertidal sediments an inefficient
nitrogen sink in this system. groundwater geochemical data indicated significant
differences in groundwater chemical composition and radium activity ratios between
the eastern and western sides of the river; these likely arose from the distinct
hydrological regimes observed in each area. groundwater from the western side
of the okatee headwaters was characterized by higher concentrations of dissolved
organic and inorganic carbon, dissolved organic nitrogen, inorganic nutrients
and reduced metabolites and trace gases, .. methane and nitrous oxide, than groundwater
from the eastern side. differences in microbial sulfate reduction, organic matter
supply, and or groundwater residence time likely contributed to this pattern.
the contrasting features of the east and west sub marsh zones highlight the need
for multiple techniques for characterization of submarine groundwater discharge
sources and the impact of biogeochemical processes on the delivery of nutrients
and carbon to coastal areas via submarine groundwater discharge. 2014 elsevier
ltd. all rights reserved.
- text: blue carbon ecosystem initiatives in the coral triangle region are increasing
due to their amplified recognition in mitigating global climate change. although
transdisciplinary approaches in the blue carbon discourse and collaborative actions
are gaining momentum in the international and national arenas, more work is still
needed at the local level. the study pursues how bce initiatives permeate through
the local communities in the philippines and indonesia, as part of ctr. using
perception surveys, the coastal residents from busuanga, philippines, and karimunjawa,
indonesia were interviewed on their awareness, utilization, perceived threats,
and management strategies for bces. potential factors affecting residents perceptions
were explored using multivariate regression and correlation analyses. also, comparative
analysis was done to determine distinctions and commonalities in perceptions as
influenced by site specific scenarios. results show that, despite respondents
presenting relatively high awareness of bce services, levels of utilization are
low with 42. 92. and 23. 85. respondents in busuanga and karimunjawa, respectively,
not directly utilizing bce resources. regression analysis showed that respondents
occupation significantly influenced their utilization rate and observed opposite
correlations in busuanga and karimunjawa . perceived threats are found to be driven
by personal experiences occurrence of natural disasters in busuanga whereas discerned
anthropogenic activities in karimunjawa. meanwhile, recognized management strategies
are influenced by the strong presence of relevant agencies like non government
and people organizations in busuanga and the local government in karimunjawa.
these results can be translated as useful metrics in contextualizing and or enhancing
bce management plans specifically in strategizing advocacy campaigns and engagement
of local stakeholders across the ctr.
- text: mangrove wetlands are important ecosystems, yet human development coupled
with climate change threatens mangroves and their large carbon stores. this study
seeks to understand the soil carbon dynamics in hydrologically altered mangrove
swamps by studying aboveground biomass estimates and belowground soil carbon concentrations
in mangrove swamps with high, medium, and low levels of disturbance in catano,
jobos bay, and vieques, puerto rico. all three sites were affected by hurricane
maria in 2017, one year prior to the study. as result of being hit by the saffir
simpson category hurricane, the low disturbance site had almost no living mangroves
left during sampling. there was no correlation between level of hydrologic alteration
and carbon storage, rather different patterns emerged for each of the three sites.
at the highly disturbed location, belowground carbon mass averaged .048 .001 cm
which increased with increased aboveground biomass. at the moderately disturbed
location, belowground carbon mass averaged .047 .003 cm and corresponded to distance
from open water. at the low disturbed location, organic carbon was consistent
between all sites and inorganic carbon concentrations controlled total carbon
mass which averaged .048 .002 cm. these results suggest that mangroves are adaptive
and resilient and have the potential to retain their carbon storage capacities
despite hydrologic alterations, but mass carbon storage within mangrove forests
can be spatially variable in hydrologically altered conditions.
pipeline_tag: text-classification
inference: false
base_model: sentence-transformers/paraphrase-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A MultiOutputClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a MultiOutputClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("ignaciosg/blueCarbon")
# Run inference
preds = model("blue carbon is vital aspect of climate change mitigation, which necessitates the identification of stocks and drivers for implementing mitigation strategies. however, reclamation may be among the most invasive forms, and the question of its influence has not been addressed well in blue carbon research. therefore, the effects of reclamation on carbon stocks and the interaction of crucial drivers from reclamation time areas were evaluated in the liaohe river delta and compared with natural reserves . carbon stocks based on invest model were lower than preexisting conditions . one way analysis of variance showed that average carbon stocks accumulated 55 years after reclamation and reached the lowest value in 85 years. the interaction analysis of dominant drivers affecting carbon showed the difference between reclaimed areas and reserves regarding potential effect pathways. in the 1930s and 1960s reclamation time areas, crop yield and industrial output determined blue carbon by changing no3 and ap. in the 1990s reclamation time area, population density played an important role. in defining the impact of vegetation cover on carbon within the reserves, the distance to the coast and residence were significant factors. this study demonstrated that coastal")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 105 | 229.475 | 432 |
### Training Hyperparameters
- batch_size: (1, 1)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.0006155918397454662
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- max_length: 1000
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
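For reference, a minimal sketch of how these hyperparameters could be passed back to the SetFit (>= 1.0) trainer is shown below; the two-example dataset is a placeholder, since the actual training data for this model is not published.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder multi-label data; the real training set is not published.
train_dataset = Dataset.from_dict({
    "text": ["example abstract about blue carbon stocks ...", "example abstract about seagrass carbon ..."],
    "label": [[1, 0], [0, 1]],
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="multi-output",  # yields the MultiOutputClassifier head described above
)

args = TrainingArguments(
    batch_size=(1, 1),
    num_epochs=(1, 1),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.0006155918397454662,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    max_length=1000,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```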
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.1819 | - |
| 0.0011 | 50 | 0.201 | - |
| 0.0023 | 100 | 0.3533 | - |
| 0.0034 | 150 | 0.0788 | - |
| 0.0046 | 200 | 0.1445 | - |
| 0.0057 | 250 | 0.1584 | - |
| 0.0069 | 300 | 0.3425 | - |
| 0.0080 | 350 | 0.1203 | - |
| 0.0092 | 400 | 0.2045 | - |
| 0.0103 | 450 | 0.0287 | - |
| 0.0115 | 500 | 0.1784 | - |
| 0.0126 | 550 | 0.2521 | - |
| 0.0138 | 600 | 0.1285 | - |
| 0.0149 | 650 | 0.2292 | - |
| 0.0161 | 700 | 0.0943 | - |
| 0.0172 | 750 | 0.1753 | - |
| 0.0184 | 800 | 0.3433 | - |
| 0.0195 | 850 | 0.262 | - |
| 0.0207 | 900 | 0.1097 | - |
| 0.0218 | 950 | 0.0015 | - |
| 0.0230 | 1000 | 0.5522 | - |
| 0.0241 | 1050 | 0.5939 | - |
| 0.0253 | 1100 | 0.1134 | - |
| 0.0264 | 1150 | 0.1258 | - |
| 0.0276 | 1200 | 0.0146 | - |
| 0.0287 | 1250 | 0.0467 | - |
| 0.0299 | 1300 | 0.3501 | - |
| 0.0310 | 1350 | 0.291 | - |
| 0.0322 | 1400 | 0.0569 | - |
| 0.0333 | 1450 | 0.0812 | - |
| 0.0345 | 1500 | 0.3397 | - |
| 0.0356 | 1550 | 0.1664 | - |
| 0.0368 | 1600 | 0.3841 | - |
| 0.0379 | 1650 | 0.1659 | - |
| 0.0391 | 1700 | 0.0809 | - |
| 0.0402 | 1750 | 0.3604 | - |
| 0.0414 | 1800 | 0.0056 | - |
| 0.0425 | 1850 | 0.3335 | - |
| 0.0437 | 1900 | 0.0005 | - |
| 0.0448 | 1950 | 0.1624 | - |
| 0.0460 | 2000 | 0.8162 | - |
| 0.0471 | 2050 | 0.0097 | - |
| 0.0483 | 2100 | 0.2561 | - |
| 0.0494 | 2150 | 0.0003 | - |
| 0.0506 | 2200 | 0.4198 | - |
| 0.0517 | 2250 | 0.0002 | - |
| 0.0529 | 2300 | 0.2793 | - |
| 0.0540 | 2350 | 0.6288 | - |
| 0.0552 | 2400 | 0.6944 | - |
| 0.0563 | 2450 | 0.7394 | - |
| 0.0575 | 2500 | 0.011 | - |
| 0.0586 | 2550 | 0.8041 | - |
| 0.0598 | 2600 | 0.0041 | - |
| 0.0609 | 2650 | 0.2446 | - |
| 0.0621 | 2700 | 0.2759 | - |
| 0.0632 | 2750 | 0.151 | - |
| 0.0644 | 2800 | 0.0651 | - |
| 0.0655 | 2850 | 0.0026 | - |
| 0.0666 | 2900 | 0.0845 | - |
| 0.0678 | 2950 | 0.7541 | - |
| 0.0689 | 3000 | 0.0993 | - |
| 0.0701 | 3050 | 0.7355 | - |
| 0.0712 | 3100 | 0.6959 | - |
| 0.0724 | 3150 | 0.1687 | - |
| 0.0735 | 3200 | 0.2048 | - |
| 0.0747 | 3250 | 0.0906 | - |
| 0.0758 | 3300 | 0.0582 | - |
| 0.0770 | 3350 | 0.9064 | - |
| 0.0781 | 3400 | 0.8038 | - |
| 0.0793 | 3450 | 0.2515 | - |
| 0.0804 | 3500 | 0.0196 | - |
| 0.0816 | 3550 | 0.0081 | - |
| 0.0827 | 3600 | 0.8483 | - |
| 0.0839 | 3650 | 0.0651 | - |
| 0.0850 | 3700 | 0.8224 | - |
| 0.0862 | 3750 | 0.2872 | - |
| 0.0873 | 3800 | 0.0506 | - |
| 0.0885 | 3850 | 0.6795 | - |
| 0.0896 | 3900 | 0.0126 | - |
| 0.0908 | 3950 | 0.5083 | - |
| 0.0919 | 4000 | 0.0215 | - |
| 0.0931 | 4050 | 0.8133 | - |
| 0.0942 | 4100 | 0.1534 | - |
| 0.0954 | 4150 | 0.2397 | - |
| 0.0965 | 4200 | 0.8576 | - |
| 0.0977 | 4250 | 0.0554 | - |
| 0.0988 | 4300 | 0.1018 | - |
| 0.1000 | 4350 | 0.3324 | - |
| 0.1011 | 4400 | 0.0221 | - |
| 0.1023 | 4450 | 0.0516 | - |
| 0.1034 | 4500 | 0.796 | - |
| 0.1046 | 4550 | 0.0903 | - |
| 0.1057 | 4600 | 0.1979 | - |
| 0.1069 | 4650 | 0.9194 | - |
| 0.1080 | 4700 | 0.2556 | - |
| 0.1092 | 4750 | 0.7224 | - |
| 0.1103 | 4800 | 0.0012 | - |
| 0.1115 | 4850 | 0.5042 | - |
| 0.1126 | 4900 | 0.5732 | - |
| 0.1138 | 4950 | 0.1041 | - |
| 0.1149 | 5000 | 0.0247 | - |
| 0.1161 | 5050 | 0.0265 | - |
| 0.1172 | 5100 | 0.0126 | - |
| 0.1184 | 5150 | 0.0098 | - |
| 0.1195 | 5200 | 0.0386 | - |
| 0.1207 | 5250 | 0.001 | - |
| 0.1218 | 5300 | 0.9248 | - |
| 0.1230 | 5350 | 0.4783 | - |
| 0.1241 | 5400 | 0.1841 | - |
| 0.1253 | 5450 | 0.4721 | - |
| 0.1264 | 5500 | 0.0601 | - |
| 0.1276 | 5550 | 0.0073 | - |
| 0.1287 | 5600 | 0.0028 | - |
| 0.1298 | 5650 | 0.012 | - |
| 0.1310 | 5700 | 0.0451 | - |
| 0.1321 | 5750 | 0.0125 | - |
| 0.1333 | 5800 | 0.5423 | - |
| 0.1344 | 5850 | 0.7545 | - |
| 0.1356 | 5900 | 0.0158 | - |
| 0.1367 | 5950 | 0.1388 | - |
| 0.1379 | 6000 | 0.0136 | - |
| 0.1390 | 6050 | 0.0043 | - |
| 0.1402 | 6100 | 0.4147 | - |
| 0.1413 | 6150 | 0.0503 | - |
| 0.1425 | 6200 | 0.0347 | - |
| 0.1436 | 6250 | 0.0465 | - |
| 0.1448 | 6300 | 0.0086 | - |
| 0.1459 | 6350 | 0.8752 | - |
| 0.1471 | 6400 | 0.5546 | - |
| 0.1482 | 6450 | 0.0348 | - |
| 0.1494 | 6500 | 0.0853 | - |
| 0.1505 | 6550 | 0.6107 | - |
| 0.1517 | 6600 | 0.005 | - |
| 0.1528 | 6650 | 0.3526 | - |
| 0.1540 | 6700 | 0.2429 | - |
| 0.1551 | 6750 | 0.6727 | - |
| 0.1563 | 6800 | 0.0019 | - |
| 0.1574 | 6850 | 0.6662 | - |
| 0.1586 | 6900 | 0.0068 | - |
| 0.1597 | 6950 | 0.0117 | - |
| 0.1609 | 7000 | 0.4718 | - |
| 0.1620 | 7050 | 0.0072 | - |
| 0.1632 | 7100 | 0.8174 | - |
| 0.1643 | 7150 | 0.0094 | - |
| 0.1655 | 7200 | 0.0241 | - |
| 0.1666 | 7250 | 0.1359 | - |
| 0.1678 | 7300 | 0.0528 | - |
| 0.1689 | 7350 | 0.0184 | - |
| 0.1701 | 7400 | 0.2204 | - |
| 0.1712 | 7450 | 0.3476 | - |
| 0.1724 | 7500 | 0.1153 | - |
| 0.1735 | 7550 | 0.0717 | - |
| 0.1747 | 7600 | 0.022 | - |
| 0.1758 | 7650 | 0.0311 | - |
| 0.1770 | 7700 | 0.4385 | - |
| 0.1781 | 7750 | 0.4274 | - |
| 0.1793 | 7800 | 0.4994 | - |
| 0.1804 | 7850 | 0.2518 | - |
| 0.1816 | 7900 | 0.8652 | - |
| 0.1827 | 7950 | 0.0019 | - |
| 0.1839 | 8000 | 0.01 | - |
| 0.1850 | 8050 | 0.0129 | - |
| 0.1862 | 8100 | 0.0001 | - |
| 0.1873 | 8150 | 0.0005 | - |
| 0.1885 | 8200 | 0.0199 | - |
| 0.1896 | 8250 | 0.1489 | - |
| 0.1908 | 8300 | 0.0016 | - |
| 0.1919 | 8350 | 0.5111 | - |
| 0.1931 | 8400 | 0.807 | - |
| 0.1942 | 8450 | 0.1489 | - |
| 0.1953 | 8500 | 0.29 | - |
| 0.1965 | 8550 | 0.0001 | - |
| 0.1976 | 8600 | 0.0043 | - |
| 0.1988 | 8650 | 0.0041 | - |
| 0.1999 | 8700 | 0.3061 | - |
| 0.2011 | 8750 | 0.0221 | - |
| 0.2022 | 8800 | 0.801 | - |
| 0.2034 | 8850 | 0.2316 | - |
| 0.2045 | 8900 | 0.2784 | - |
| 0.2057 | 8950 | 0.0957 | - |
| 0.2068 | 9000 | 0.611 | - |
| 0.2080 | 9050 | 0.7529 | - |
| 0.2091 | 9100 | 0.0565 | - |
| 0.2103 | 9150 | 0.0114 | - |
| 0.2114 | 9200 | 0.2864 | - |
| 0.2126 | 9250 | 0.1954 | - |
| 0.2137 | 9300 | 0.7993 | - |
| 0.2149 | 9350 | 0.0501 | - |
| 0.2160 | 9400 | 0.0051 | - |
| 0.2172 | 9450 | 0.6012 | - |
| 0.2183 | 9500 | 0.0131 | - |
| 0.2195 | 9550 | 0.0157 | - |
| 0.2206 | 9600 | 0.0606 | - |
| 0.2218 | 9650 | 0.9143 | - |
| 0.2229 | 9700 | 0.0001 | - |
| 0.2241 | 9750 | 0.0021 | - |
| 0.2252 | 9800 | 0.0004 | - |
| 0.2264 | 9850 | 0.0498 | - |
| 0.2275 | 9900 | 0.0021 | - |
| 0.2287 | 9950 | 0.8591 | - |
| 0.2298 | 10000 | 0.2218 | - |
| 0.2310 | 10050 | 0.0065 | - |
| 0.2321 | 10100 | 0.0924 | - |
| 0.2333 | 10150 | 0.8866 | - |
| 0.2344 | 10200 | 0.0004 | - |
| 0.2356 | 10250 | 0.1434 | - |
| 0.2367 | 10300 | 0.0118 | - |
| 0.2379 | 10350 | 0.025 | - |
| 0.2390 | 10400 | 0.8472 | - |
| 0.2402 | 10450 | 0.0352 | - |
| 0.2413 | 10500 | 0.0105 | - |
| 0.2425 | 10550 | 0.0025 | - |
| 0.2436 | 10600 | 0.0042 | - |
| 0.2448 | 10650 | 0.3461 | - |
| 0.2459 | 10700 | 0.0314 | - |
| 0.2471 | 10750 | 0.1411 | - |
| 0.2482 | 10800 | 0.0006 | - |
| 0.2494 | 10850 | 0.0013 | - |
| 0.2505 | 10900 | 0.894 | - |
| 0.2517 | 10950 | 0.9961 | - |
| 0.2528 | 11000 | 0.9908 | - |
| 0.2540 | 11050 | 0.836 | - |
| 0.2551 | 11100 | 0.8847 | - |
| 0.2563 | 11150 | 0.8493 | - |
| 0.2574 | 11200 | 0.5851 | - |
| 0.2585 | 11250 | 0.9502 | - |
| 0.2597 | 11300 | 0.8396 | - |
| 0.2608 | 11350 | 0.1942 | - |
| 0.2620 | 11400 | 0.9298 | - |
| 0.2631 | 11450 | 0.742 | - |
| 0.2643 | 11500 | 0.8624 | - |
| 0.2654 | 11550 | 0.5423 | - |
| 0.2666 | 11600 | 0.8576 | - |
| 0.2677 | 11650 | 0.8042 | - |
| 0.2689 | 11700 | 0.7447 | - |
| 0.2700 | 11750 | 0.5319 | - |
| 0.2712 | 11800 | 0.451 | - |
| 0.2723 | 11850 | 0.4115 | - |
| 0.2735 | 11900 | 0.6772 | - |
| 0.2746 | 11950 | 0.4701 | - |
| 0.2758 | 12000 | 0.6101 | - |
| 0.2769 | 12050 | 0.4914 | - |
| 0.2781 | 12100 | 0.653 | - |
| 0.2792 | 12150 | 0.6205 | - |
| 0.2804 | 12200 | 0.651 | - |
| 0.2815 | 12250 | 0.2223 | - |
| 0.2827 | 12300 | 0.7124 | - |
| 0.2838 | 12350 | 0.6502 | - |
| 0.2850 | 12400 | 0.5812 | - |
| 0.2861 | 12450 | 0.6483 | - |
| 0.2873 | 12500 | 0.7335 | - |
| 0.2884 | 12550 | 0.239 | - |
| 0.2896 | 12600 | 0.6499 | - |
| 0.2907 | 12650 | 0.4453 | - |
| 0.2919 | 12700 | 0.7152 | - |
| 0.2930 | 12750 | 0.5551 | - |
| 0.2942 | 12800 | 0.6034 | - |
| 0.2953 | 12850 | 0.5714 | - |
| 0.2965 | 12900 | 0.5867 | - |
| 0.2976 | 12950 | 0.4249 | - |
| 0.2988 | 13000 | 0.7262 | - |
| 0.2999 | 13050 | 0.542 | - |
| 0.3011 | 13100 | 0.5301 | - |
| 0.3022 | 13150 | 0.7503 | - |
| 0.3034 | 13200 | 0.6918 | - |
| 0.3045 | 13250 | 0.5352 | - |
| 0.3057 | 13300 | 0.6065 | - |
| 0.3068 | 13350 | 0.373 | - |
| 0.3080 | 13400 | 0.7648 | - |
| 0.3091 | 13450 | 0.2762 | - |
| 0.3103 | 13500 | 0.708 | - |
| 0.3114 | 13550 | 0.1481 | - |
| 0.3126 | 13600 | 0.7231 | - |
| 0.3137 | 13650 | 0.6023 | - |
| 0.3149 | 13700 | 0.7021 | - |
| 0.3160 | 13750 | 0.5843 | - |
| 0.3172 | 13800 | 0.7361 | - |
| 0.3183 | 13850 | 0.7844 | - |
| 0.3195 | 13900 | 0.51 | - |
| 0.3206 | 13950 | 0.506 | - |
| 0.3218 | 14000 | 0.3072 | - |
| 0.3229 | 14050 | 0.5854 | - |
| 0.3240 | 14100 | 0.3553 | - |
| 0.3252 | 14150 | 0.6827 | - |
| 0.3263 | 14200 | 0.5342 | - |
| 0.3275 | 14250 | 0.6887 | - |
| 0.3286 | 14300 | 0.6007 | - |
| 0.3298 | 14350 | 0.4573 | - |
| 0.3309 | 14400 | 0.5979 | - |
| 0.3321 | 14450 | 0.5328 | - |
| 0.3332 | 14500 | 0.6814 | - |
| 0.3344 | 14550 | 0.6207 | - |
| 0.3355 | 14600 | 0.8189 | - |
| 0.3367 | 14650 | 0.5794 | - |
| 0.3378 | 14700 | 0.3987 | - |
| 0.3390 | 14750 | 0.5281 | - |
| 0.3401 | 14800 | 0.652 | - |
| 0.3413 | 14850 | 0.6811 | - |
| 0.3424 | 14900 | 0.3334 | - |
| 0.3436 | 14950 | 0.565 | - |
| 0.3447 | 15000 | 0.4956 | - |
| 0.3459 | 15050 | 0.7289 | - |
| 0.3470 | 15100 | 0.6103 | - |
| 0.3482 | 15150 | 0.4173 | - |
| 0.3493 | 15200 | 0.2138 | - |
| 0.3505 | 15250 | 0.893 | - |
| 0.3516 | 15300 | 0.5385 | - |
| 0.3528 | 15350 | 0.6386 | - |
| 0.3539 | 15400 | 0.7168 | - |
| 0.3551 | 15450 | 0.1189 | - |
| 0.3562 | 15500 | 0.3046 | - |
| 0.3574 | 15550 | 0.4776 | - |
| 0.3585 | 15600 | 0.7062 | - |
| 0.3597 | 15650 | 0.0972 | - |
| 0.3608 | 15700 | 0.4485 | - |
| 0.3620 | 15750 | 0.5843 | - |
| 0.3631 | 15800 | 0.5656 | - |
| 0.3643 | 15850 | 0.5682 | - |
| 0.3654 | 15900 | 0.416 | - |
| 0.3666 | 15950 | 0.2427 | - |
| 0.3677 | 16000 | 0.4942 | - |
| 0.3689 | 16050 | 0.4734 | - |
| 0.3700 | 16100 | 0.7099 | - |
| 0.3712 | 16150 | 0.5899 | - |
| 0.3723 | 16200 | 0.3502 | - |
| 0.3735 | 16250 | 0.3448 | - |
| 0.3746 | 16300 | 0.6606 | - |
| 0.3758 | 16350 | 0.5239 | - |
| 0.3769 | 16400 | 0.6872 | - |
| 0.3781 | 16450 | 0.2828 | - |
| 0.3792 | 16500 | 0.6973 | - |
| 0.3804 | 16550 | 0.6628 | - |
| 0.3815 | 16600 | 0.6429 | - |
| 0.3827 | 16650 | 0.4321 | - |
| 0.3838 | 16700 | 0.6626 | - |
| 0.3850 | 16750 | 0.5044 | - |
| 0.3861 | 16800 | 0.7683 | - |
| 0.3872 | 16850 | 0.6687 | - |
| 0.3884 | 16900 | 0.5821 | - |
| 0.3895 | 16950 | 0.6572 | - |
| 0.3907 | 17000 | 0.9609 | - |
| 0.3918 | 17050 | 0.0123 | - |
| 0.3930 | 17100 | 0.5649 | - |
| 0.3941 | 17150 | 0.1006 | - |
| 0.3953 | 17200 | 0.003 | - |
| 0.3964 | 17250 | 0.278 | - |
| 0.3976 | 17300 | 0.8632 | - |
| 0.3987 | 17350 | 0.5101 | - |
| 0.3999 | 17400 | 0.8753 | - |
| 0.4010 | 17450 | 0.3195 | - |
| 0.4022 | 17500 | 0.9436 | - |
| 0.4033 | 17550 | 0.9388 | - |
| 0.4045 | 17600 | 0.0097 | - |
| 0.4056 | 17650 | 0.6898 | - |
| 0.4068 | 17700 | 0.035 | - |
| 0.4079 | 17750 | 0.4828 | - |
| 0.4091 | 17800 | 0.1888 | - |
| 0.4102 | 17850 | 0.0354 | - |
| 0.4114 | 17900 | 0.0008 | - |
| 0.4125 | 17950 | 0.2885 | - |
| 0.4137 | 18000 | 0.0624 | - |
| 0.4148 | 18050 | 0.5545 | - |
| 0.4160 | 18100 | 0.5317 | - |
| 0.4171 | 18150 | 0.0207 | - |
| 0.4183 | 18200 | 0.0228 | - |
| 0.4194 | 18250 | 0.0168 | - |
| 0.4206 | 18300 | 0.0935 | - |
| 0.4217 | 18350 | 0.8391 | - |
| 0.4229 | 18400 | 0.0005 | - |
| 0.4240 | 18450 | 0.7018 | - |
| 0.4252 | 18500 | 0.0137 | - |
| 0.4263 | 18550 | 0.0053 | - |
| 0.4275 | 18600 | 0.0307 | - |
| 0.4286 | 18650 | 0.0127 | - |
| 0.4298 | 18700 | 0.2351 | - |
| 0.4309 | 18750 | 0.0047 | - |
| 0.4321 | 18800 | 0.0114 | - |
| 0.4332 | 18850 | 0.0153 | - |
| 0.4344 | 18900 | 0.3732 | - |
| 0.4355 | 18950 | 0.77 | - |
| 0.4367 | 19000 | 0.1298 | - |
| 0.4378 | 19050 | 0.7064 | - |
| 0.4390 | 19100 | 0.0 | - |
| 0.4401 | 19150 | 0.0044 | - |
| 0.4413 | 19200 | 0.7627 | - |
| 0.4424 | 19250 | 0.556 | - |
| 0.4436 | 19300 | 0.2105 | - |
| 0.4447 | 19350 | 0.8194 | - |
| 0.4459 | 19400 | 0.027 | - |
| 0.4470 | 19450 | 0.9308 | - |
| 0.4482 | 19500 | 0.0194 | - |
| 0.4493 | 19550 | 0.0144 | - |
| 0.4505 | 19600 | 0.584 | - |
| 0.4516 | 19650 | 0.0042 | - |
| 0.4527 | 19700 | 0.1354 | - |
| 0.4539 | 19750 | 0.2151 | - |
| 0.4550 | 19800 | 0.0006 | - |
| 0.4562 | 19850 | 0.3085 | - |
| 0.4573 | 19900 | 0.0543 | - |
| 0.4585 | 19950 | 0.0178 | - |
| 0.4596 | 20000 | 0.418 | - |
| 0.4608 | 20050 | 0.019 | - |
| 0.4619 | 20100 | 0.0001 | - |
| 0.4631 | 20150 | 0.5443 | - |
| 0.4642 | 20200 | 0.5111 | - |
| 0.4654 | 20250 | 0.0594 | - |
| 0.4665 | 20300 | 0.0086 | - |
| 0.4677 | 20350 | 0.0064 | - |
| 0.4688 | 20400 | 0.0577 | - |
| 0.4700 | 20450 | 0.0712 | - |
| 0.4711 | 20500 | 0.0271 | - |
| 0.4723 | 20550 | 0.5118 | - |
| 0.4734 | 20600 | 0.1834 | - |
| 0.4746 | 20650 | 0.0116 | - |
| 0.4757 | 20700 | 0.0052 | - |
| 0.4769 | 20750 | 0.7975 | - |
| 0.4780 | 20800 | 0.3037 | - |
| 0.4792 | 20850 | 0.0264 | - |
| 0.4803 | 20900 | 0.6911 | - |
| 0.4815 | 20950 | 0.008 | - |
| 0.4826 | 21000 | 0.0041 | - |
| 0.4838 | 21050 | 0.0379 | - |
| 0.4849 | 21100 | 0.0033 | - |
| 0.4861 | 21150 | 0.0297 | - |
| 0.4872 | 21200 | 0.0147 | - |
| 0.4884 | 21250 | 0.0001 | - |
| 0.4895 | 21300 | 0.0047 | - |
| 0.4907 | 21350 | 0.0247 | - |
| 0.4918 | 21400 | 0.0059 | - |
| 0.4930 | 21450 | 0.5724 | - |
| 0.4941 | 21500 | 0.3113 | - |
| 0.4953 | 21550 | 0.0026 | - |
| 0.4964 | 21600 | 0.835 | - |
| 0.4976 | 21650 | 0.0007 | - |
| 0.4987 | 21700 | 0.029 | - |
| 0.4999 | 21750 | 0.707 | - |
| 0.5010 | 21800 | 0.0211 | - |
| 0.5022 | 21850 | 0.0071 | - |
| 0.5033 | 21900 | 0.0009 | - |
| 0.5045 | 21950 | 0.0319 | - |
| 0.5056 | 22000 | 0.2219 | - |
| 0.5068 | 22050 | 0.0244 | - |
| 0.5079 | 22100 | 0.0341 | - |
| 0.5091 | 22150 | 0.0372 | - |
| 0.5102 | 22200 | 0.3981 | - |
| 0.5114 | 22250 | 0.0627 | - |
| 0.5125 | 22300 | 0.0559 | - |
| 0.5137 | 22350 | 0.5366 | - |
| 0.5148 | 22400 | 0.6952 | - |
| 0.5159 | 22450 | 0.0504 | - |
| 0.5171 | 22500 | 0.5098 | - |
| 0.5182 | 22550 | 0.6538 | - |
| 0.5194 | 22600 | 0.0015 | - |
| 0.5205 | 22650 | 0.0005 | - |
| 0.5217 | 22700 | 0.0974 | - |
| 0.5228 | 22750 | 0.009 | - |
| 0.5240 | 22800 | 0.6559 | - |
| 0.5251 | 22850 | 0.026 | - |
| 0.5263 | 22900 | 0.0049 | - |
| 0.5274 | 22950 | 0.0104 | - |
| 0.5286 | 23000 | 0.7918 | - |
| 0.5297 | 23050 | 0.0007 | - |
| 0.5309 | 23100 | 0.0015 | - |
| 0.5320 | 23150 | 0.2873 | - |
| 0.5332 | 23200 | 0.002 | - |
| 0.5343 | 23250 | 0.0067 | - |
| 0.5355 | 23300 | 0.2943 | - |
| 0.5366 | 23350 | 0.0029 | - |
| 0.5378 | 23400 | 0.0 | - |
| 0.5389 | 23450 | 0.0727 | - |
| 0.5401 | 23500 | 0.0084 | - |
| 0.5412 | 23550 | 0.0 | - |
| 0.5424 | 23600 | 0.0054 | - |
| 0.5435 | 23650 | 0.0004 | - |
| 0.5447 | 23700 | 0.5525 | - |
| 0.5458 | 23750 | 0.0251 | - |
| 0.5470 | 23800 | 0.0269 | - |
| 0.5481 | 23850 | 0.7426 | - |
| 0.5493 | 23900 | 0.0016 | - |
| 0.5504 | 23950 | 0.8143 | - |
| 0.5516 | 24000 | 0.5158 | - |
| 0.5527 | 24050 | 0.0047 | - |
| 0.5539 | 24100 | 0.0067 | - |
| 0.5550 | 24150 | 0.0 | - |
| 0.5562 | 24200 | 0.0045 | - |
| 0.5573 | 24250 | 0.0021 | - |
| 0.5585 | 24300 | 0.0012 | - |
| 0.5596 | 24350 | 0.3501 | - |
| 0.5608 | 24400 | 0.0101 | - |
| 0.5619 | 24450 | 0.0008 | - |
| 0.5631 | 24500 | 0.0112 | - |
| 0.5642 | 24550 | 0.0148 | - |
| 0.5654 | 24600 | 0.2246 | - |
| 0.5665 | 24650 | 0.1538 | - |
| 0.5677 | 24700 | 0.0001 | - |
| 0.5688 | 24750 | 0.0001 | - |
| 0.5700 | 24800 | 0.1296 | - |
| 0.5711 | 24850 | 0.0101 | - |
| 0.5723 | 24900 | 0.0032 | - |
| 0.5734 | 24950 | 0.0714 | - |
| 0.5746 | 25000 | 0.0 | - |
| 0.5757 | 25050 | 0.0886 | - |
| 0.5769 | 25100 | 0.0003 | - |
| 0.5780 | 25150 | 0.0041 | - |
| 0.5792 | 25200 | 0.0151 | - |
| 0.5803 | 25250 | 0.0099 | - |
| 0.5814 | 25300 | 0.0008 | - |
| 0.5826 | 25350 | 0.028 | - |
| 0.5837 | 25400 | 0.1064 | - |
| 0.5849 | 25450 | 0.0373 | - |
| 0.5860 | 25500 | 0.5589 | - |
| 0.5872 | 25550 | 0.2522 | - |
| 0.5883 | 25600 | 0.8553 | - |
| 0.5895 | 25650 | 0.0004 | - |
| 0.5906 | 25700 | 0.6575 | - |
| 0.5918 | 25750 | 0.0034 | - |
| 0.5929 | 25800 | 0.7313 | - |
| 0.5941 | 25850 | 0.8363 | - |
| 0.5952 | 25900 | 0.0156 | - |
| 0.5964 | 25950 | 0.0044 | - |
| 0.5975 | 26000 | 0.1387 | - |
| 0.5987 | 26050 | 0.0487 | - |
| 0.5998 | 26100 | 0.001 | - |
| 0.6010 | 26150 | 0.0004 | - |
| 0.6021 | 26200 | 0.0071 | - |
| 0.6033 | 26250 | 0.0012 | - |
| 0.6044 | 26300 | 0.021 | - |
| 0.6056 | 26350 | 0.0212 | - |
| 0.6067 | 26400 | 0.8472 | - |
| 0.6079 | 26450 | 0.5686 | - |
| 0.6090 | 26500 | 0.0721 | - |
| 0.6102 | 26550 | 0.0235 | - |
| 0.6113 | 26600 | 0.0 | - |
| 0.6125 | 26650 | 0.0098 | - |
| 0.6136 | 26700 | 0.3805 | - |
| 0.6148 | 26750 | 0.0525 | - |
| 0.6159 | 26800 | 0.0139 | - |
| 0.6171 | 26850 | 0.0011 | - |
| 0.6182 | 26900 | 0.0013 | - |
| 0.6194 | 26950 | 0.0058 | - |
| 0.6205 | 27000 | 0.0581 | - |
| 0.6217 | 27050 | 0.477 | - |
| 0.6228 | 27100 | 0.0073 | - |
| 0.6240 | 27150 | 0.0033 | - |
| 0.6251 | 27200 | 0.0082 | - |
| 0.6263 | 27250 | 0.0028 | - |
| 0.6274 | 27300 | 0.0001 | - |
| 0.6286 | 27350 | 0.0265 | - |
| 0.6297 | 27400 | 0.097 | - |
| 0.6309 | 27450 | 0.2339 | - |
| 0.6320 | 27500 | 0.5429 | - |
| 0.6332 | 27550 | 0.3859 | - |
| 0.6343 | 27600 | 0.0116 | - |
| 0.6355 | 27650 | 0.0006 | - |
| 0.6366 | 27700 | 0.0018 | - |
| 0.6378 | 27750 | 0.0197 | - |
| 0.6389 | 27800 | 0.0085 | - |
| 0.6401 | 27850 | 0.0 | - |
| 0.6412 | 27900 | 0.0141 | - |
| 0.6424 | 27950 | 0.1121 | - |
| 0.6435 | 28000 | 0.0123 | - |
| 0.6446 | 28050 | 0.3018 | - |
| 0.6458 | 28100 | 0.7669 | - |
| 0.6469 | 28150 | 0.6745 | - |
| 0.6481 | 28200 | 0.4283 | - |
| 0.6492 | 28250 | 0.0237 | - |
| 0.6504 | 28300 | 0.8327 | - |
| 0.6515 | 28350 | 0.1052 | - |
| 0.6527 | 28400 | 0.4264 | - |
| 0.6538 | 28450 | 0.6714 | - |
| 0.6550 | 28500 | 0.0039 | - |
| 0.6561 | 28550 | 0.0065 | - |
| 0.6573 | 28600 | 0.0178 | - |
| 0.6584 | 28650 | 0.3817 | - |
| 0.6596 | 28700 | 0.0584 | - |
| 0.6607 | 28750 | 0.0217 | - |
| 0.6619 | 28800 | 0.0019 | - |
| 0.6630 | 28850 | 0.4605 | - |
| 0.6642 | 28900 | 0.0049 | - |
| 0.6653 | 28950 | 0.0011 | - |
| 0.6665 | 29000 | 0.569 | - |
| 0.6676 | 29050 | 0.0 | - |
| 0.6688 | 29100 | 0.0874 | - |
| 0.6699 | 29150 | 0.5388 | - |
| 0.6711 | 29200 | 0.4093 | - |
| 0.6722 | 29250 | 0.3076 | - |
| 0.6734 | 29300 | 0.4542 | - |
| 0.6745 | 29350 | 0.2569 | - |
| 0.6757 | 29400 | 0.0155 | - |
| 0.6768 | 29450 | 0.1146 | - |
| 0.6780 | 29500 | 0.1341 | - |
| 0.6791 | 29550 | 0.0304 | - |
| 0.6803 | 29600 | 0.0095 | - |
| 0.6814 | 29650 | 0.443 | - |
| 0.6826 | 29700 | 0.5068 | - |
| 0.6837 | 29750 | 0.024 | - |
| 0.6849 | 29800 | 0.0079 | - |
| 0.6860 | 29850 | 0.1769 | - |
| 0.6872 | 29900 | 0.0001 | - |
| 0.6883 | 29950 | 0.0104 | - |
| 0.6895 | 30000 | 0.4234 | - |
| 0.6906 | 30050 | 0.0042 | - |
| 0.6918 | 30100 | 0.3934 | - |
| 0.6929 | 30150 | 0.0119 | - |
| 0.6941 | 30200 | 0.0012 | - |
| 0.6952 | 30250 | 0.4434 | - |
| 0.6964 | 30300 | 0.6101 | - |
| 0.6975 | 30350 | 0.3655 | - |
| 0.6987 | 30400 | 0.168 | - |
| 0.6998 | 30450 | 0.8202 | - |
| 0.7010 | 30500 | 0.0906 | - |
| 0.7021 | 30550 | 0.0287 | - |
| 0.7033 | 30600 | 0.3671 | - |
| 0.7044 | 30650 | 0.7084 | - |
| 0.7056 | 30700 | 0.3632 | - |
| 0.7067 | 30750 | 0.0027 | - |
| 0.7079 | 30800 | 0.0451 | - |
| 0.7090 | 30850 | 0.3421 | - |
| 0.7101 | 30900 | 0.0077 | - |
| 0.7113 | 30950 | 0.0404 | - |
| 0.7124 | 31000 | 0.7512 | - |
| 0.7136 | 31050 | 0.2898 | - |
| 0.7147 | 31100 | 0.0721 | - |
| 0.7159 | 31150 | 0.009 | - |
| 0.7170 | 31200 | 0.0474 | - |
| 0.7182 | 31250 | 0.0041 | - |
| 0.7193 | 31300 | 0.0249 | - |
| 0.7205 | 31350 | 0.3519 | - |
| 0.7216 | 31400 | 0.0936 | - |
| 0.7228 | 31450 | 0.0049 | - |
| 0.7239 | 31500 | 0.0035 | - |
| 0.7251 | 31550 | 0.0296 | - |
| 0.7262 | 31600 | 0.0264 | - |
| 0.7274 | 31650 | 0.5318 | - |
| 0.7285 | 31700 | 0.0029 | - |
| 0.7297 | 31750 | 0.7741 | - |
| 0.7308 | 31800 | 0.0807 | - |
| 0.7320 | 31850 | 0.0154 | - |
| 0.7331 | 31900 | 0.0181 | - |
| 0.7343 | 31950 | 0.7881 | - |
| 0.7354 | 32000 | 0.2723 | - |
| 0.7366 | 32050 | 0.0549 | - |
| 0.7377 | 32100 | 0.0198 | - |
| 0.7389 | 32150 | 0.0083 | - |
| 0.7400 | 32200 | 0.4985 | - |
| 0.7412 | 32250 | 0.0111 | - |
| 0.7423 | 32300 | 0.0057 | - |
| 0.7435 | 32350 | 0.0393 | - |
| 0.7446 | 32400 | 0.0786 | - |
| 0.7458 | 32450 | 0.1888 | - |
| 0.7469 | 32500 | 0.0382 | - |
| 0.7481 | 32550 | 0.5611 | - |
| 0.7492 | 32600 | 0.0749 | - |
| 0.7504 | 32650 | 0.0064 | - |
| 0.7515 | 32700 | 0.0002 | - |
| 0.7527 | 32750 | 0.0159 | - |
| 0.7538 | 32800 | 0.025 | - |
| 0.7550 | 32850 | 0.0271 | - |
| 0.7561 | 32900 | 0.251 | - |
| 0.7573 | 32950 | 0.0002 | - |
| 0.7584 | 33000 | 0.1407 | - |
| 0.7596 | 33050 | 0.1596 | - |
| 0.7607 | 33100 | 0.0069 | - |
| 0.7619 | 33150 | 0.0655 | - |
| 0.7630 | 33200 | 0.0435 | - |
| 0.7642 | 33250 | 0.0032 | - |
| 0.7653 | 33300 | 0.1908 | - |
| 0.7665 | 33350 | 0.4326 | - |
| 0.7676 | 33400 | 0.1699 | - |
| 0.7688 | 33450 | 0.005 | - |
| 0.7699 | 33500 | 0.4937 | - |
| 0.7711 | 33550 | 0.0635 | - |
| 0.7722 | 33600 | 0.0042 | - |
| 0.7733 | 33650 | 0.0001 | - |
| 0.7745 | 33700 | 0.0088 | - |
| 0.7756 | 33750 | 0.0313 | - |
| 0.7768 | 33800 | 0.0072 | - |
| 0.7779 | 33850 | 0.0291 | - |
| 0.7791 | 33900 | 0.0037 | - |
| 0.7802 | 33950 | 0.0192 | - |
| 0.7814 | 34000 | 0.0017 | - |
| 0.7825 | 34050 | 0.0006 | - |
| 0.7837 | 34100 | 0.0119 | - |
| 0.7848 | 34150 | 0.1647 | - |
| 0.7860 | 34200 | 0.009 | - |
| 0.7871 | 34250 | 0.0004 | - |
| 0.7883 | 34300 | 0.5268 | - |
| 0.7894 | 34350 | 0.0523 | - |
| 0.7906 | 34400 | 0.0537 | - |
| 0.7917 | 34450 | 0.1654 | - |
| 0.7929 | 34500 | 0.0003 | - |
| 0.7940 | 34550 | 0.0021 | - |
| 0.7952 | 34600 | 0.0016 | - |
| 0.7963 | 34650 | 0.0002 | - |
| 0.7975 | 34700 | 0.0001 | - |
| 0.7986 | 34750 | 0.0001 | - |
| 0.7998 | 34800 | 0.0204 | - |
| 0.8009 | 34850 | 0.0047 | - |
| 0.8021 | 34900 | 0.2942 | - |
| 0.8032 | 34950 | 0.0039 | - |
| 0.8044 | 35000 | 0.0237 | - |
| 0.8055 | 35050 | 0.0002 | - |
| 0.8067 | 35100 | 0.0009 | - |
| 0.8078 | 35150 | 0.7804 | - |
| 0.8090 | 35200 | 0.0012 | - |
| 0.8101 | 35250 | 0.0303 | - |
| 0.8113 | 35300 | 0.0265 | - |
| 0.8124 | 35350 | 0.0071 | - |
| 0.8136 | 35400 | 0.0053 | - |
| 0.8147 | 35450 | 0.068 | - |
| 0.8159 | 35500 | 0.0233 | - |
| 0.8170 | 35550 | 0.4748 | - |
| 0.8182 | 35600 | 0.0253 | - |
| 0.8193 | 35650 | 0.0 | - |
| 0.8205 | 35700 | 0.2029 | - |
| 0.8216 | 35750 | 0.0063 | - |
| 0.8228 | 35800 | 0.0179 | - |
| 0.8239 | 35850 | 0.0039 | - |
| 0.8251 | 35900 | 0.0123 | - |
| 0.8262 | 35950 | 0.3021 | - |
| 0.8274 | 36000 | 0.0096 | - |
| 0.8285 | 36050 | 0.3735 | - |
| 0.8297 | 36100 | 0.0281 | - |
| 0.8308 | 36150 | 0.0612 | - |
| 0.8320 | 36200 | 0.028 | - |
| 0.8331 | 36250 | 0.6296 | - |
| 0.8343 | 36300 | 0.1161 | - |
| 0.8354 | 36350 | 0.0249 | - |
| 0.8366 | 36400 | 0.0 | - |
| 0.8377 | 36450 | 0.4144 | - |
| 0.8388 | 36500 | 0.1574 | - |
| 0.8400 | 36550 | 0.0083 | - |
| 0.8411 | 36600 | 0.0385 | - |
| 0.8423 | 36650 | 0.4681 | - |
| 0.8434 | 36700 | 0.0628 | - |
| 0.8446 | 36750 | 0.0005 | - |
| 0.8457 | 36800 | 0.2092 | - |
| 0.8469 | 36850 | 0.009 | - |
| 0.8480 | 36900 | 0.031 | - |
| 0.8492 | 36950 | 0.3659 | - |
| 0.8503 | 37000 | 0.0003 | - |
| 0.8515 | 37050 | 0.0117 | - |
| 0.8526 | 37100 | 0.0061 | - |
| 0.8538 | 37150 | 0.0163 | - |
| 0.8549 | 37200 | 0.0 | - |
| 0.8561 | 37250 | 0.0668 | - |
| 0.8572 | 37300 | 0.0108 | - |
| 0.8584 | 37350 | 0.1344 | - |
| 0.8595 | 37400 | 0.0196 | - |
| 0.8607 | 37450 | 0.0006 | - |
| 0.8618 | 37500 | 0.0005 | - |
| 0.8630 | 37550 | 0.45 | - |
| 0.8641 | 37600 | 0.0002 | - |
| 0.8653 | 37650 | 0.0032 | - |
| 0.8664 | 37700 | 0.0035 | - |
| 0.8676 | 37750 | 0.1411 | - |
| 0.8687 | 37800 | 0.007 | - |
| 0.8699 | 37850 | 0.0015 | - |
| 0.8710 | 37900 | 0.6745 | - |
| 0.8722 | 37950 | 0.0002 | - |
| 0.8733 | 38000 | 0.2138 | - |
| 0.8745 | 38050 | 0.0092 | - |
| 0.8756 | 38100 | 0.4335 | - |
| 0.8768 | 38150 | 0.0011 | - |
| 0.8779 | 38200 | 0.0265 | - |
| 0.8791 | 38250 | 0.6394 | - |
| 0.8802 | 38300 | 0.3108 | - |
| 0.8814 | 38350 | 0.1918 | - |
| 0.8825 | 38400 | 0.0006 | - |
| 0.8837 | 38450 | 0.0075 | - |
| 0.8848 | 38500 | 0.5738 | - |
| 0.8860 | 38550 | 0.008 | - |
| 0.8871 | 38600 | 0.0043 | - |
| 0.8883 | 38650 | 0.7087 | - |
| 0.8894 | 38700 | 0.0044 | - |
| 0.8906 | 38750 | 0.0045 | - |
| 0.8917 | 38800 | 0.0009 | - |
| 0.8929 | 38850 | 0.0118 | - |
| 0.8940 | 38900 | 0.2812 | - |
| 0.8952 | 38950 | 0.0581 | - |
| 0.8963 | 39000 | 0.0016 | - |
| 0.8975 | 39050 | 0.0284 | - |
| 0.8986 | 39100 | 0.0061 | - |
| 0.8998 | 39150 | 0.13 | - |
| 0.9009 | 39200 | 0.0061 | - |
| 0.9021 | 39250 | 0.0508 | - |
| 0.9032 | 39300 | 0.214 | - |
| 0.9043 | 39350 | 0.0032 | - |
| 0.9055 | 39400 | 0.0234 | - |
| 0.9066 | 39450 | 0.0318 | - |
| 0.9078 | 39500 | 0.003 | - |
| 0.9089 | 39550 | 0.3719 | - |
| 0.9101 | 39600 | 0.0092 | - |
| 0.9112 | 39650 | 0.0027 | - |
| 0.9124 | 39700 | 0.3007 | - |
| 0.9135 | 39750 | 0.0535 | - |
| 0.9147 | 39800 | 0.0027 | - |
| 0.9158 | 39850 | 0.8316 | - |
| 0.9170 | 39900 | 0.3543 | - |
| 0.9181 | 39950 | 0.7228 | - |
| 0.9193 | 40000 | 0.4475 | - |
| 0.9204 | 40050 | 0.0044 | - |
| 0.9216 | 40100 | 0.0077 | - |
| 0.9227 | 40150 | 0.0668 | - |
| 0.9239 | 40200 | 0.0036 | - |
| 0.9250 | 40250 | 0.0032 | - |
| 0.9262 | 40300 | 0.035 | - |
| 0.9273 | 40350 | 0.011 | - |
| 0.9285 | 40400 | 0.0 | - |
| 0.9296 | 40450 | 0.5078 | - |
| 0.9308 | 40500 | 0.0003 | - |
| 0.9319 | 40550 | 0.0 | - |
| 0.9331 | 40600 | 0.0 | - |
| 0.9342 | 40650 | 0.0029 | - |
| 0.9354 | 40700 | 0.0001 | - |
| 0.9365 | 40750 | 0.0003 | - |
| 0.9377 | 40800 | 0.2938 | - |
| 0.9388 | 40850 | 0.0059 | - |
| 0.9400 | 40900 | 0.0646 | - |
| 0.9411 | 40950 | 0.0067 | - |
| 0.9423 | 41000 | 0.001 | - |
| 0.9434 | 41050 | 0.7928 | - |
| 0.9446 | 41100 | 0.0013 | - |
| 0.9457 | 41150 | 0.0271 | - |
| 0.9469 | 41200 | 0.0322 | - |
| 0.9480 | 41250 | 0.0127 | - |
| 0.9492 | 41300 | 0.0 | - |
| 0.9503 | 41350 | 0.4948 | - |
| 0.9515 | 41400 | 0.0185 | - |
| 0.9526 | 41450 | 0.4775 | - |
| 0.9538 | 41500 | 0.0046 | - |
| 0.9549 | 41550 | 0.0002 | - |
| 0.9561 | 41600 | 0.352 | - |
| 0.9572 | 41650 | 0.5607 | - |
| 0.9584 | 41700 | 0.0003 | - |
| 0.9595 | 41750 | 0.1911 | - |
| 0.9607 | 41800 | 0.0117 | - |
| 0.9618 | 41850 | 0.0008 | - |
| 0.9630 | 41900 | 0.0029 | - |
| 0.9641 | 41950 | 0.0034 | - |
| 0.9653 | 42000 | 0.0128 | - |
| 0.9664 | 42050 | 0.3599 | - |
| 0.9675 | 42100 | 0.5342 | - |
| 0.9687 | 42150 | 0.0333 | - |
| 0.9698 | 42200 | 0.0358 | - |
| 0.9710 | 42250 | 0.0039 | - |
| 0.9721 | 42300 | 0.0001 | - |
| 0.9733 | 42350 | 0.0066 | - |
| 0.9744 | 42400 | 0.0006 | - |
| 0.9756 | 42450 | 0.0005 | - |
| 0.9767 | 42500 | 0.5468 | - |
| 0.9779 | 42550 | 0.0121 | - |
| 0.9790 | 42600 | 0.0833 | - |
| 0.9802 | 42650 | 0.0152 | - |
| 0.9813 | 42700 | 0.001 | - |
| 0.9825 | 42750 | 0.0074 | - |
| 0.9836 | 42800 | 0.8221 | - |
| 0.9848 | 42850 | 0.0039 | - |
| 0.9859 | 42900 | 0.1647 | - |
| 0.9871 | 42950 | 0.0014 | - |
| 0.9882 | 43000 | 0.0006 | - |
| 0.9894 | 43050 | 0.0008 | - |
| 0.9905 | 43100 | 0.0 | - |
| 0.9917 | 43150 | 0.1409 | - |
| 0.9928 | 43200 | 0.0004 | - |
| 0.9940 | 43250 | 0.0006 | - |
| 0.9951 | 43300 | 0.0634 | - |
| 0.9963 | 43350 | 0.1843 | - |
| 0.9974 | 43400 | 0.0133 | - |
| 0.9986 | 43450 | 0.2553 | - |
| 0.9997 | 43500 | 0.0005 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.5.1
- Transformers: 4.38.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
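The framework versions above identify this as a SetFit checkpoint. A minimal inference sketch is shown below; the repository id is a placeholder, since this card does not state one, and `predict` returns whatever labels the classification head was trained on.

```python
from setfit import SetFitModel

# Placeholder repository id — substitute the id of this checkpoint.
model = SetFitModel.from_pretrained("your-username/your-setfit-checkpoint")

predictions = model.predict(["example text to classify"])
print(predictions)
```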
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
CurtisJeon/OrionStarAI-Orion-14B-Base-4bit | CurtisJeon | 2024-03-10T19:57:48Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"orion",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-03-10T19:53:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
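A hedged loading sketch is given below while the card lacks official usage code. It assumes the repository stores bitsandbytes-serialized 4-bit weights (per the `4-bit`/`bitsandbytes` tags) and that the custom `orion` modeling code must be trusted; neither is confirmed in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "CurtisJeon/OrionStarAI-Orion-14B-Base-4bit"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,  # the 'orion' architecture ships custom modeling code
    device_map="auto",       # requires `accelerate`; 4-bit weights also require `bitsandbytes`
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```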
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Holarissun/phi2-aisft-hh-seqsampler-subset10000 | Holarissun | 2024-03-10T19:57:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T19:57:24Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi2-aisft-hh-seqsampler-subset10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-aisft-hh-seqsampler-subset10000
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
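Since the adapter targets `microsoft/phi-2`, a minimal loading sketch is given below. The HH-style prompt is an assumption — the card does not document the expected prompt layout.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "Holarissun/phi2-aisft-hh-seqsampler-subset10000")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

prompt = "Human: How do I bake bread?\n\nAssistant:"  # assumed HH-style prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```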
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Holarissun/phi2-aisft-hh-randsampler-subset10000 | Holarissun | 2024-03-10T19:56:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T19:56:06Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi2-aisft-hh-randsampler-subset10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-aisft-hh-randsampler-subset10000
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
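A minimal sketch for merging this adapter back into its `microsoft/phi-2` base model to produce a standalone checkpoint. It assumes the adapter is a LoRA-style adapter that supports merging, which the card does not state explicitly.

```python
from peft import AutoPeftModelForCausalLM

# Loads microsoft/phi-2 plus this adapter, then folds the adapter weights into the base model.
model = AutoPeftModelForCausalLM.from_pretrained("Holarissun/phi2-aisft-hh-randsampler-subset10000")
merged = model.merge_and_unload()
merged.save_pretrained("phi2-hh-randsampler-merged")
```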
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Benevolent/PonyDiffusionV10 | Benevolent | 2024-03-10T19:55:58Z | 48 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2024-03-10T18:30:27Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/ba765138-484b-4f2d-bc58-ab0cdf1f6337.webp
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: mit
---
# PonyDiffusionV10
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Benevolent/PonyDiffusionV10/tree/main) them in the Files & versions tab.
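Since the card lists `stabilityai/stable-diffusion-xl-base-1.0` as the base model, a hedged loading sketch with 🤗 Diffusers follows. It assumes the repository's safetensors file is a diffusers-compatible SDXL LoRA; if the repo holds several weight files, `load_lora_weights` may need an explicit `weight_name`. The prompt is a placeholder.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Assumed to be a diffusers-compatible SDXL LoRA in safetensors format.
pipe.load_lora_weights("Benevolent/PonyDiffusionV10")

image = pipe("a pony standing in a sunlit meadow", num_inference_steps=30).images[0]
image.save("pony.png")
```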
|
xKizzi/q-FrozenLake-v1-4x4-noSlippery | xKizzi | 2024-03-10T19:48:49Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T19:48:47Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the download helper defined in the Deep RL course notebooks
model = load_from_hub(repo_id="xKizzi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# This checkpoint was trained on the non-slippery map, so pass is_slippery=False
env = gym.make(model["env_id"], is_slippery=False)
```
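Once loaded, the Q-table can drive a greedy rollout. The sketch below assumes the saved dictionary stores the table under a `"qtable"` key (as the course's push-to-hub helper does) and that the environment uses the new-style `reset`/`step` API returning `(obs, info)` and a five-tuple.

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```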
|
omarelsayeed/QWEN-2B-Instruction-Tuned-ServiceCodes | omarelsayeed | 2024-03-10T19:33:29Z | 73 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T19:31:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
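A hedged generation sketch follows while the card lacks official usage code. The service-code prompt is a placeholder (the expected input format is not documented here), and it assumes the tokenizer ships the standard Qwen2 chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "omarelsayeed/QWEN-2B-Instruction-Tuned-ServiceCodes"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

# Placeholder prompt — the real service-code schema is not described in this card.
messages = [{"role": "user", "content": "Classify this request into a service code: 'My internet is down.'"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```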
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kajol/mistral_math_expert_v01 | kajol | 2024-03-10T19:28:57Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-03-10T19:28:13Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
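One plausible loading path, while the card lacks official usage code, is `AutoPeftModelForCausalLM`, which reads the base model (`mistralai/Mistral-7B-Instruct-v0.1`) from the adapter config. The `[INST] ... [/INST]` layout is the usual Mistral-Instruct prompt convention, and the math question is a placeholder.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "kajol/mistral_math_expert_v01"
model = AutoPeftModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

prompt = "[INST] What is the derivative of x^2? [/INST]"  # placeholder question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```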
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
AlexandreManai/Reinforce-Pixelcopter-PLE-v0 | AlexandreManai | 2024-03-10T19:25:30Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T14:10:22Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 79.90 +/- 45.47
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|