modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
grace-pro/oops_i_did_it_again_eval_hans_full_set | grace-pro | 2024-03-11T02:56:38Z | 2 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-11T02:55:26Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: oops_i_did_it_again_eval_hans_full_set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oops_i_did_it_again_eval_hans_full_set
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8314
- Precision: 0.7598
- Recall: 0.2665
- F1-score: 0.3946
- Accuracy: 0.5911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 0.5414 | 1.0 | 24544 | 1.8314 | 0.7598 | 0.2665 | 0.3946 | 0.5911 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
essiam/clean_art_cat | essiam | 2024-03-11T02:55:52Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-11T02:44:09Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of 686HenrietteRonnerKnip859 cat
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - essiam/clean_art_cat
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of 686HenrietteRonnerKnip859 cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
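A minimal sketch with the standard `diffusers` text-to-image API (assuming a CUDA GPU; drop `torch_dtype` and `.to("cuda")` to run on CPU) might look like this:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "essiam/clean_art_cat", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt the model was trained on.
prompt = "a photo of 686HenrietteRonnerKnip859 cat"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("clean_art_cat.png")
```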
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
jeonsiyun/layoutlmv3-v29-epoch5 | jeonsiyun | 2024-03-11T02:45:47Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-11T02:45:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
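In the absence of an official snippet, a minimal document-classification sketch for a LayoutLMv3 checkpoint (assuming the repository ships a processor configuration, that `pytesseract` is available for the processor's built-in OCR, and that `document.png` is a placeholder path) might look like this:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForSequenceClassification

model_id = "jeonsiyun/layoutlmv3-v29-epoch5"
# If the repo has no processor config, fall back to "microsoft/layoutlmv3-base".
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")
# With apply_ocr=True (the LayoutLMv3 processor default), words and boxes are
# extracted automatically; otherwise pass words= and boxes= explicitly.
encoding = processor(image, return_tensors="pt")
logits = model(**encoding).logits
predicted_id = logits.argmax(-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```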
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Reemvn/distilroberta-base | Reemvn | 2024-03-11T02:43:33Z | 46 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T23:41:57Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: Reemvn/distilroberta-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Reemvn/distilroberta-base
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0947
- Validation Loss: 0.1512
- Train Accuracy: 0.9455
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1343 | 0.1610 | 0.945 | 0 |
| 0.1097 | 0.1589 | 0.949 | 1 |
| 0.0947 | 0.1512 | 0.9455 | 2 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
lilyray/albert_emotion | lilyray | 2024-03-11T02:28:15Z | 121 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:lilyray/albert_emotion",
"base_model:finetune:lilyray/albert_emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T17:30:03Z | ---
license: apache-2.0
base_model: lilyray/albert_emotion
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: albert_emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_emotion
This model is a fine-tuned version of [lilyray/albert_emotion](https://huggingface.co/lilyray/albert_emotion) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2391
- Accuracy: 0.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.363600088100325e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 19
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1744 | 1.0 | 4000 | 0.2001 | 0.938 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
michaelnoyes/thedistil | michaelnoyes | 2024-03-11T02:24:37Z | 0 | 0 | null | [
"region:us"
] | null | 2024-03-10T20:45:37Z | Within the repository I have the untrained models, listed by name, followed by their fine-tuned versions.
I also included the code that would be used for the quantitative analysis table, which I was not able to run. |
DaRkSpyro/GabiRio2 | DaRkSpyro | 2024-03-11T02:22:37Z | 0 | 0 | flair | [
"flair",
"music",
"en",
"dataset:HuggingFaceTB/cosmopedia",
"license:apache-2.0",
"region:us"
] | null | 2024-03-11T02:19:38Z | ---
license: apache-2.0
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
metrics:
- accuracy
library_name: flair
tags:
- music
--- |
asedmammad/Contextual_KTO_Mistral_PairRM-GGUF | asedmammad | 2024-03-11T01:54:18Z | 83 | 2 | null | [
"gguf",
"kto",
"dpo",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"rl",
"rlaif",
"en",
"dataset:snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset",
"arxiv:2402.01306",
"base_model:ContextualAI/Contextual_KTO_Mistral_PairRM",
"base_model:quantized:ContextualAI/Contextual_KTO_Mistral_PairRM",
"license:apache-2.0",
"region:us",
"conversational"
] | null | 2024-03-10T22:07:16Z | ---
base_model: ContextualAI/Contextual_KTO_Mistral_PairRM
inference: false
language:
- en
license: apache-2.0
tags:
- kto
- dpo
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- rl
- rlaif
datasets:
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
metrics:
- accuracy
model_creator: ContextualAI
model_name: Contextual KTO Mistral PairRM
model_type: mistral
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: Ased Mammad
---
# Contextual_KTO_Mistral_PairRM - GGUF
- Model creator: [ContextualAI](https://huggingface.co/ContextualAI)
- Original model: [Contextual_KTO_Mistral_PairRM](https://huggingface.co/ContextualAI/Contextual_KTO_Mistral_PairRM)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Contextual_KTO_Mistral_PairRM](https://huggingface.co/ContextualAI/Contextual_KTO_Mistral_PairRM).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|user|>
{prompt}
<|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Contextual_KTO_Mistral_PairRM.Q2_K.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q2_K.gguf) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [Contextual_KTO_Mistral_PairRM.Q3_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [Contextual_KTO_Mistral_PairRM.Q3_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [Contextual_KTO_Mistral_PairRM.Q3_K_L.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [Contextual_KTO_Mistral_PairRM.Q4_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Contextual_KTO_Mistral_PairRM.Q4_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [Contextual_KTO_Mistral_PairRM.Q5_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Contextual_KTO_Mistral_PairRM.Q5_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [Contextual_KTO_Mistral_PairRM.Q5_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [Contextual_KTO_Mistral_PairRM.Q6_K.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [Contextual_KTO_Mistral_PairRM.Q8_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF and below it, a specific filename to download, such as: Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|user|>\n{prompt}<|assistant|>\n", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
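For example, a minimal LangChain sketch backed by llama-cpp-python (the `langchain_community` import path assumes LangChain 0.1 or later; parameter values are illustrative) might look like this:
```python
from langchain_community.llms import LlamaCpp

# Point model_path at a downloaded GGUF file; tune n_gpu_layers / n_ctx to your hardware.
llm = LlamaCpp(
    model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf",
    n_gpu_layers=35,
    n_ctx=32768,
    temperature=0.7,
)

print(llm.invoke("<|user|>\nWrite a short poem about llamas.\n<|assistant|>\n"))
```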
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- original-model-card start -->
This repo contains the model and tokenizer checkpoints for:
- model family [<b>mistralai/Mistral-7B-Instruct-v0.2</b>](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- optimized with the loss [<b>KTO</b>](https://twitter.com/winniethexu/status/1732839295365554643)
- aligned using the [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset)
- via 3 iterations of KTO, one epoch over each training partition, with each previous iteration's model serving as the reference for the next.
**[03/06/2024]**: We are #2 on the (verified) [Alpaca Eval 2.0 Leaderboard](https://tatsu-lab.github.io/alpaca_eval/) scoring **33.23**!
To prompt this model, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added at tokenization time and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
You may also use our tokenizer's `apply_chat_template` if doing inference with `chatml` set or evaluating generations through non-local clients.
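As one way to build that prompt programmatically, a minimal sketch using the original repository's tokenizer (assuming it ships a chat template) might look like this:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ContextualAI/Contextual_KTO_Mistral_PairRM")

messages = [
    {"role": "user", "content": "Hi! I'm looking for a cake recipe."},
]
# add_generation_prompt=True appends the assistant turn marker so the model answers next.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```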
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for more details on the methodology.
If you found this work useful, feel free to cite [our work](https://arxiv.org/abs/2402.01306):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
<!-- original-model-card end -->
|
jeonsiyun/layoutlmv3-v29-epoch25 | jeonsiyun | 2024-03-11T01:54:04Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-11T01:53:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shadowlight2784/Adagio_Dazzle | shadowlight2784 | 2024-03-11T01:51:35Z | 0 | 0 | null | [
"region:us"
] | null | 2023-11-09T05:16:00Z | Use for Retrieval-based Voice Conversion (RVC).
https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI |
shadowlight2784/Sonata_Dusk_Singing_Voice | shadowlight2784 | 2024-03-11T01:50:56Z | 0 | 1 | null | [
"region:us"
] | null | 2023-08-25T23:43:01Z | Use for Retrieval-based Voice Conversion (RVC).
https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI |
ITT-AF/ITT-Yi-Ko-6B-v6.0 | ITT-AF | 2024-03-11T01:43:50Z | 55 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-07T02:52:58Z | ---
license: cc-by-nc-4.0
---
## ITT-AF/ITT-Yi-Ko-6B-v6.0
This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 4
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 8
* total_train_batch_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0 |
juhwanlee/llmdo-Mistral-7B-case-7 | juhwanlee | 2024-03-11T01:43:03Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-11T00:41:21Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Mistral-7B-v0.1. We fine-tune this model for the data ordering task.
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sample from the Open-Orca dataset. (We fine-tune on the 100,000-example subset.)
# GitHub
https://github.com/trailerAI
# License
Apache License 2.0 |
grace-pro/oops_i_did_it_again_eval_hans | grace-pro | 2024-03-11T01:40:10Z | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-11T01:38:55Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: oops_i_did_it_again_eval_hans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oops_i_did_it_again_eval_hans
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5556
- Precision: 0.8070
- Recall: 0.2481
- F1-score: 0.3796
- Accuracy: 0.5944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 0.6706 | 1.0 | 4909 | 1.5556 | 0.8070 | 0.2481 | 0.3796 | 0.5944 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_sign_lf_signal_it_38 | furrutiav | 2024-03-11T01:39:52Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-11T01:39:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
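In the absence of an official snippet, a minimal feature-extraction sketch with the standard `transformers` API (the pooling choice is illustrative) might look like this:
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "furrutiav/bert_qa_extractor_cockatiel_2022_ulra_sign_lf_signal_it_38"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("An example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One common choice of sentence embedding: the [CLS] token's hidden state.
embedding = outputs.last_hidden_state[:, 0]
print(embedding.shape)
```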
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_org_lf_signal_it_32 | furrutiav | 2024-03-11T01:39:12Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-11T01:38:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wongctroman/fine-tuned-cloudy-sentence-transformer-3 | wongctroman | 2024-03-11T01:30:02Z | 48 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-11T01:28:55Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 500,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
xia01ongLi/dummy-model2 | xia01ongLi | 2024-03-11T01:28:47Z | 201 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-03-11T01:27:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Holarissun/gptj6b-aisft-hh-randsampler-subset2000 | Holarissun | 2024-03-11T01:20:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:EleutherAI/gpt-j-6b",
"base_model:adapter:EleutherAI/gpt-j-6b",
"license:apache-2.0",
"region:us"
] | null | 2024-03-11T01:16:40Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: EleutherAI/gpt-j-6b
model-index:
- name: gptj6b-aisft-hh-randsampler-subset2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gptj6b-aisft-hh-randsampler-subset2000
This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on an unknown dataset.
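Because this repository contains a PEFT adapter rather than full model weights, loading it presumably follows the usual PEFT pattern. The snippet below is a hedged sketch; the prompt and generation settings are illustrative and not taken from this card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/gpt-j-6b"
adapter_id = "Holarissun/gptj6b-aisft-hh-randsampler-subset2000"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id).to(device)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt only; the training prompt format is not documented in this card.
inputs = tokenizer("Human: How do I brew a good cup of coffee?\n\nAssistant:", return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```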
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
tsavage68/mistralit2_250_STEPS_1e7_rate_05_beta_DPO | tsavage68 | 2024-03-11T01:17:30Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-11T01:13:23Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mistralit2_250_STEPS_1e7_rate_05_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralit2_250_STEPS_1e7_rate_05_beta_DPO
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6043
- Rewards/chosen: -1.1749
- Rewards/rejected: -1.6856
- Rewards/accuracies: 0.6330
- Rewards/margins: 0.5107
- Logps/rejected: -31.9435
- Logps/chosen: -25.7355
- Logits/rejected: -2.8536
- Logits/chosen: -2.8539
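These reward columns follow the usual DPO bookkeeping: the implicit reward of a response y is beta * (log pi_theta(y|x) - log pi_ref(y|x)), so rewards/margins is the mean gap between the chosen and rejected rewards and rewards/accuracies is the fraction of pairs where the chosen response scores higher. The beta of 0.5 is an inference from the repository name and is not stated in the card body.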
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6684 | 0.1 | 50 | 0.6660 | -0.2264 | -0.2957 | 0.5934 | 0.0693 | -29.1637 | -23.8386 | -2.8636 | -2.8639 |
| 0.5945 | 0.2 | 100 | 0.6396 | -1.5064 | -1.9635 | 0.6044 | 0.4572 | -32.4994 | -26.3985 | -2.8444 | -2.8447 |
| 0.4857 | 0.29 | 150 | 0.6440 | -2.0003 | -2.6364 | 0.6198 | 0.6362 | -33.8453 | -27.3863 | -2.8460 | -2.8463 |
| 0.5631 | 0.39 | 200 | 0.6018 | -1.1675 | -1.6769 | 0.6330 | 0.5093 | -31.9261 | -25.7209 | -2.8536 | -2.8539 |
| 0.6109 | 0.49 | 250 | 0.6043 | -1.1749 | -1.6856 | 0.6330 | 0.5107 | -31.9435 | -25.7355 | -2.8536 | -2.8539 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
aegunal/FT_IPD_llama7b | aegunal | 2024-03-11T01:11:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-11T01:11:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kaijie-qin/llama-2-7b-kube | kaijie-qin | 2024-03-11T01:04:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T08:23:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ozzyonfire/bird-species-classifier | ozzyonfire | 2024-03-11T01:00:42Z | 150 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"efficientnet",
"image-classification",
"biology",
"vision",
"en",
"dataset:chriamue/bird-species-dataset",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-10T18:57:02Z | ---
license: mit
datasets:
- chriamue/bird-species-dataset
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: image-classification
tags:
- biology
- image-classification
- vision
model-index:
- name: bird-species-classifier
results:
- task:
type: ImageClassification
dataset:
type: chriamue/bird-species-dataset
name: Bird Species
config: default
split: validation
metrics:
- type: accuracy
value: 96.8
- type: loss
value: 0.1379
---
# Model Card for "Bird Species Classifier"
This model came from chriamue/bird-species-classifier. It has been retrained using ResNet50 in the hope of getting it running with Transformers.js.
## Model Description
The "Bird Species Classifier" is a state-of-the-art image classification model designed to identify various bird species from images. It uses the EfficientNet architecture and has been fine-tuned to achieve high accuracy in recognizing a wide range of bird species.
### How to Use
You can easily use the model in your Python environment with the following code:
```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
extractor = AutoFeatureExtractor.from_pretrained("ozzyonfire/bird-species-classifier")
model = AutoModelForImageClassification.from_pretrained("ozzyonfire/bird-species-classifier")
```
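A minimal inference sketch on top of the snippet above (the image path is a placeholder):
```python
import torch
from PIL import Image

image = Image.open("example_bird.jpg")  # placeholder path
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```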
### Applications
- Bird species identification for educational or ecological research.
- Assistance in biodiversity monitoring and conservation efforts.
- Enhancing user experience in nature apps and platforms.
## Training Data
The model was trained on the "Bird Species" dataset, which is a comprehensive collection of bird images. Key features of this dataset include:
- **Total Species**: 525 bird species.
- **Training Images**: 84,635 images.
- **Validation Images**: 2,625 images.
- **Test Images**: 2,625 images.
- **Image Format**: Color images (224x224x3) in JPG format.
- **Source**: Sourced from Kaggle.
## Training Results
The model achieved impressive results after 6 epochs of training:
- **Accuracy**: 96.8%
- **Loss**: 0.1379
- **Runtime**: 136.81 seconds
- **Samples per Second**: 19.188
- **Steps per Second**: 1.206
- **Total Training Steps**: 31,740
These metrics indicate a high level of performance, making the model reliable for practical applications.
## Limitations and Bias
- The performance of the model might vary under different lighting conditions or image qualities.
- The model's accuracy is dependent on the diversity and representation in the training dataset. It may perform less effectively on bird species not well represented in the dataset.
## Ethical Considerations
This model should be used responsibly, considering privacy and environmental impacts. It should not be used for harmful purposes such as targeting endangered species or violating wildlife protection laws.
## Acknowledgements
We would like to acknowledge the creators of the dataset on Kaggle for providing a rich source of data that made this model possible.
## See also
- [Bird Species Dataset](https://huggingface.co/datasets/chriamue/bird-species-dataset)
- [Kaggle Dataset](https://www.kaggle.com/datasets/gpiosenka/100-bird-species/data)
- [Bird Species Classifier](https://huggingface.co/dennisjooo/Birds-Classifier-EfficientNetB2)
|
ZainAli60/miner_1 | ZainAli60 | 2024-03-11T00:59:16Z | 175 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-11T00:58:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nandapratama241/path-to-save-model | Nandapratama241 | 2024-03-11T00:47:31Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:finetune:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-11T00:11:35Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2-1-base
inference: true
instance_prompt: a photo of NAnFRst person
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Nandapratama241/path-to-save-model
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of NAnFRst person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
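Until the snippet above is filled in, a minimal sampling sketch could look like the following; the precision, device, step count and guidance scale are assumptions, not details from this card:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Nandapratama241/path-to-save-model", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a photo of NAnFRst person"  # the instance prompt used for training
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("nanfrst_sample.png")
```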
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Omar23moh/UNIT3 | Omar23moh | 2024-03-11T00:45:47Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T14:20:13Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 136.00 +/- 26.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Omar23moh -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Omar23moh -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Omar23moh
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 1000),
('n_timesteps', 10000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 100),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
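Outside the RL Zoo CLI, the checkpoint can presumably also be loaded directly with `huggingface_sb3`; the filename below follows the usual RL Zoo naming convention and is an assumption, so check the repository for the actual file name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed filename based on the RL Zoo convention; verify against the repo contents.
checkpoint = load_from_hub(
    repo_id="Omar23moh/UNIT3",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
# To run the agent, recreate SpaceInvadersNoFrameskip-v4 with the same Atari wrappers and frame stacking.
```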
|
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_sign_adamw_lf_signal_it_1 | furrutiav | 2024-03-11T00:37:50Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-11T00:37:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Litzy619/V0309O6 | Litzy619 | 2024-03-11T00:26:55Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T16:43:26Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309O6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309O6
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9549 | 0.09 | 10 | 0.7661 |
| 0.3073 | 0.17 | 20 | 0.1105 |
| 0.1318 | 0.26 | 30 | 0.0849 |
| 0.1149 | 0.34 | 40 | 0.0834 |
| 0.1155 | 0.43 | 50 | 0.0803 |
| 0.1048 | 0.51 | 60 | 0.0807 |
| 0.0963 | 0.6 | 70 | 0.0808 |
| 0.0992 | 0.68 | 80 | 0.0777 |
| 0.0893 | 0.77 | 90 | 0.0731 |
| 0.1061 | 0.85 | 100 | 0.0747 |
| 0.098 | 0.94 | 110 | 0.0711 |
| 0.095 | 1.02 | 120 | 0.0699 |
| 0.0908 | 1.11 | 130 | 0.0743 |
| 0.0874 | 1.19 | 140 | 0.0734 |
| 0.083 | 1.28 | 150 | 0.0682 |
| 0.0823 | 1.37 | 160 | 0.0701 |
| 0.0812 | 1.45 | 170 | 0.0684 |
| 0.078 | 1.54 | 180 | 0.0683 |
| 0.0763 | 1.62 | 190 | 0.0671 |
| 0.0763 | 1.71 | 200 | 0.0650 |
| 0.08 | 1.79 | 210 | 0.0634 |
| 0.0686 | 1.88 | 220 | 0.0650 |
| 0.0685 | 1.96 | 230 | 0.0638 |
| 0.074 | 2.05 | 240 | 0.0644 |
| 0.0646 | 2.13 | 250 | 0.0630 |
| 0.0669 | 2.22 | 260 | 0.0675 |
| 0.061 | 2.3 | 270 | 0.0675 |
| 0.0672 | 2.39 | 280 | 0.0635 |
| 0.0687 | 2.47 | 290 | 0.0625 |
| 0.0656 | 2.56 | 300 | 0.0625 |
| 0.0738 | 2.65 | 310 | 0.0626 |
| 0.062 | 2.73 | 320 | 0.0628 |
| 0.0622 | 2.82 | 330 | 0.0631 |
| 0.0632 | 2.9 | 340 | 0.0630 |
| 0.0644 | 2.99 | 350 | 0.0631 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
NorGLM/NorLLama-3B-NO-MRPC-peft | NorGLM | 2024-03-11T00:17:11Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-11T00:15:43Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorLLama-3B-NO-MRPC-peft is trained on top of [NorLLama-3B](https://huggingface.co/NorGLM/NorLLama-3B) model on [NO-MRPC](https://huggingface.co/datasets/NorGLM/NO-MRPC) dataset.
Data format:
```
input: {text_a}[SEP]{text_b}
label: {0, 1}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorLLama-3B"
peft_model_id = "NorGLM/NorLLama-3B-NO-MRPC-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["text_a", "text_b"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "text_a", "text_b"], axis=1)
df["label"] = df.label.map({0: 0, 1: 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-MRPC", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
NorGLM/NorGPT-3B-NO-MRPC-peft | NorGLM | 2024-03-11T00:11:59Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-11T00:10:03Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-3B-NO-MRPC-peft is trained on top of [NorGPT-3B](https://huggingface.co/NorGLM/NorGPT-3B) model on [NO-MRPC](https://huggingface.co/datasets/NorGLM/NO-MRPC) dataset.
Data format:
```
input: {text_a}[SEP]{text_b}
label: {0, 1}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-3B"
peft_model_id = "NorGLM/NorGPT-3B-NO-MRPC-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["text_a", "text_b"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "text_a", "text_b"], axis=1)
df["label"] = df.label.map({0: 0, 1: 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-MRPC", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
NorGLM/NorGPT-3B-NO-QNLI-peft | NorGLM | 2024-03-11T00:09:21Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T23:50:10Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-3B-NO-QNLI-peft is trained on top of [NorGPT-3B](https://huggingface.co/NorGLM/NorGPT-3B) model on [NO-QNLI](https://huggingface.co/datasets/NorGLM/NO-QNLI) dataset.
Data format:
```
input: {premise}[SEP]{hypothesis}
label: {entailment, not_entailment} -> {1,0}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-3B"
peft_model_id = "NorGLM/NorGPT-3B-NO-QNLI-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["premise", "hypothesis"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "premise", "hypothesis"], axis=1)
#df['label'] = df['label'].replace({1:'contradiction', -1:'entailment', 0:'neutral'})
df["label"] = df.label.map({"not_entailment": 0, "entailment": 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-QNLI", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
NorGLM/NorLlama-3B-NO-QNLI-peft | NorGLM | 2024-03-11T00:09:04Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T23:55:51Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorLlama-3B-NO-QNLI-peft is trained on top of [NorLlama-3B](https://huggingface.co/NorGLM/NorLlama-3B) model on [NO-QNLI](https://huggingface.co/datasets/NorGLM/NO-QNLI) dataset.
Data format:
```
input: {premise}[SEP]{hypothesis}
label: {entailment, not_entailment} -> {1,0}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorLlama-3B"
peft_model_id = "NorGLM/NorLlama-3B-NO-QNLI-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["premise", "hypothesis"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "premise", "hypothesis"], axis=1)
#df['label'] = df['label'].replace({1:'contradiction', -1:'entailment', 0:'neutral'})
df["label"] = df.label.map({"not_entailment": 0, "entailment": 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-QNLI", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
NorGLM/NorGPT-3B-continue-NO-QNLI-peft | NorGLM | 2024-03-11T00:08:47Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T23:53:53Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-3B-continue-NO-QNLI-peft is trained on top of [NorGPT-3B-continue](https://huggingface.co/NorGLM/NorGPT-3B-continue) model on [NO-QNLI](https://huggingface.co/datasets/NorGLM/NO-QNLI) dataset.
Data format:
```
input: {premise}[SEP]{hypothesis}
label: {entailment, not_entailment} -> {1,0}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-3B-continue"
peft_model_id = "NorGLM/NorGPT-3B-continue-NO-QNLI-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["premise", "hypothesis"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "premise", "hypothesis"], axis=1)
#df['label'] = df['label'].replace({1:'contradiction', -1:'entailment', 0:'neutral'})
df["label"] = df.label.map({"not_entailment": 0, "entailment": 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-QNLI", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
NorGLM/NorGPT-369M-NO-QNLI-peft | NorGLM | 2024-03-11T00:08:20Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T23:45:24Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-369M-NO-QNLI-peft is trained on top of [NorGPT-369M](https://huggingface.co/NorGLM/NorGPT-369M) model on [NO-QNLI](https://huggingface.co/datasets/NorGLM/NO-QNLI) dataset.
Data format:
```
input: {premise}[SEP]{hypothesis}
label: {entailment, not_entailment} -> {1,0}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-369M"
peft_model_id = "NorGLM/NorGPT-369M-NO-QNLI-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["premise", "hypothesis"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "premise", "hypothesis"], axis=1)
#df['label'] = df['label'].replace({1:'contradiction', -1:'entailment', 0:'neutral'})
df["label"] = df.label.map({"not_entailment": 0, "entailment": 1})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-QNLI", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
Davada/subnet6 | Davada | 2024-03-11T00:08:11Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T23:32:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NorGLM/NorGPT-369M-NO-MRPC-peft | NorGLM | 2024-03-11T00:07:22Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-11T00:02:01Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-369M-NO-MRPC-peft is trained on top of the [NorGPT-369M](https://huggingface.co/NorGLM/NorGPT-369M) model on the [NO-MRPC](https://huggingface.co/datasets/NorGLM/NO-MRPC) dataset.
Data format:
```
input: {text_a}[SEP]{text_b}
label: {0, 1}
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorGPT-369M"
peft_model_id = "NorGLM/NorGPT-369M-NO-MRPC-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
    # Join text_a and text_b into a single input text; labels are already {0, 1}
    df["text"] = df[["text_a", "text_b"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis=1)
    df = df.drop(["idx", "text_a", "text_b"], axis=1)
    df["label"] = df.label.map({0: 0, 1: 1})
    return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-MRPC", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
Holarissun/phi2-aisft-synhh-randsampler-subset30000 | Holarissun | 2024-03-10T23:59:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T23:59:14Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi2-aisft-synhh-randsampler-subset30000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-aisft-synhh-randsampler-subset30000
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
nold/MediKAI-GGUF | nold | 2024-03-10T23:58:10Z | 20 | 0 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T15:35:00Z | ---
license: other
---
# MediKAI - Your Healthcare Companion 🏥💬
Welcome to mediKAI, the latest healthcare-focused model by HelpingAI, designed to provide personalized assistance and support for medical-related queries.
## Overview
mediKAI is a 14-billion-parameter model that specializes in healthcare-related topics and medical assistance. Whether you have questions about symptoms, treatments, medications, or general health and wellness, mediKAI is here to help.
## Languages Supported
- English
- French
- Hindi
- Spanish
- Arabic
***
Quantization of the model [OEvortex/MediKAI](https://huggingface.co/OEvortex/MediKAI), created using the [llm-quantizer](https://github.com/Nold360/llm-quantizer) pipeline.
|
dzakwan/cybersec | dzakwan | 2024-03-10T23:56:53Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T15:53:39Z | ---
library_name: transformers
widget:
- messages:
- role: user
content: >-
We need to prepare for the possibility of a security incident. Can you
create an incident response plan for our organization?
inference:
parameters:
max_new_tokens: 200
tags:
- unsloth
- trl
- sft
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** M Dzakwan Falih
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
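Until an official snippet is provided, a minimal sketch for this Gemma-based chat fine-tune should look roughly as follows (the prompt is taken from the widget example above; the dtype, device mapping, and generation settings are illustrative assumptions, not part of the original card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dzakwan/cybersec"

# Load the fine-tuned Gemma checkpoint and its tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Format a single-turn conversation with the tokenizer's chat template
messages = [
    {
        "role": "user",
        "content": "We need to prepare for the possibility of a security incident. "
                   "Can you create an incident response plan for our organization?",
    },
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```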
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hanzla/gemma-2b-it-ds | hanzla | 2024-03-10T23:49:53Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T18:33:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
koesn/Saul-Instruct-v1-GGUF | koesn | 2024-03-10T23:48:42Z | 56 | 5 | transformers | [
"transformers",
"gguf",
"legal",
"en",
"arxiv:2403.03883",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T16:19:55Z | ---
library_name: transformers
tags:
- legal
license: mit
language:
- en
---
# Saul-Instruct-v1
## Description
This repo contains GGUF format model files for Saul-Instruct-v1.
## Files Provided
| Name | Quant | Bits | File Size | Remark |
| ---------------------------- | ------- | ---- | --------- | -------------------------------- |
| saul-instruct-v1.IQ3_S.gguf | IQ3_S | 3 | 3.18 GB | 3.44 bpw quantization |
| saul-instruct-v1.IQ3_M.gguf | IQ3_M | 3 | 3.28 GB | 3.66 bpw quantization mix |
| saul-instruct-v1.Q4_0.gguf | Q4_0 | 4 | 4.11 GB | 3.56G, +0.2166 ppl |
| saul-instruct-v1.IQ4_NL.gguf | IQ4_NL | 4 | 4.16 GB | 4.25 bpw non-linear quantization |
| saul-instruct-v1.Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 3.80G, +0.0532 ppl |
| saul-instruct-v1.Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 4.45G, +0.0122 ppl |
| saul-instruct-v1.Q6_K.gguf | Q6_K | 6 | 5.94 GB | 5.15G, +0.0008 ppl |
| saul-instruct-v1.Q8_0.gguf | Q8_0 | 8 | 7.70 GB | 6.70G, +0.0004 ppl |
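As a quick way to try one of these files locally, a minimal sketch with `llama-cpp-python` (the file name comes from the table above; the prompt and sampling settings are illustrative, and the Mistral-style `[INST] ... [/INST]` format is assumed since the base model is Mistral-7B):

```python
from llama_cpp import Llama

# Load the 4-bit K-quant file listed above (download it from this repo first)
llm = Llama(model_path="saul-instruct-v1.Q4_K_M.gguf", n_ctx=4096)

# Mistral-style instruction prompt
prompt = "[INST] Summarize the main obligations created by a non-disclosure agreement. [/INST]"
output = llm(prompt, max_tokens=256, temperature=0.2)
print(output["choices"][0]["text"])
```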
## Parameters
| path | type | architecture | rope_theta | sliding_win | max_pos_embed |
| ----------------------- | ------- | ------------------ | ---------- | ----------- | ------------- |
| Equall/Saul-Instruct-v1 | mistral | MistralForCausalLM | 10000 | 4096 | 32768 |
## Benchmarks
See original model card.
# Original Model Card
# Equall/Saul-Instruct-v1
This is the instruct model Equall/Saul-Instruct-v1, a large instruct language model tailored for the legal domain, obtained by continued pretraining of Mistral-7B.
Check out our website and register at https://equall.ai/

## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Equall.ai in collaboration with CentraleSupelec, Sorbonne Université, Instituto Superior Técnico and NOVA School of Law
- **Model type:** 7B
- **Language(s) (NLP):** English
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Paper:** https://arxiv.org/abs/2403.03883
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
You can use it for legal use cases that involve generation.
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Equall/Saul-Instruct-v1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer’s chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "[YOUR QUERY GOES HERE]"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model is built on LLM technology, which comes with inherent limitations. It may occasionally generate inaccurate or nonsensical outputs. Furthermore, as a 7B model, it is expected to be less robust than larger models, such as the 70B variant.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{colombo2024saullm7b,
title={SaulLM-7B: A pioneering Large Language Model for Law},
author={Pierre Colombo and Telmo Pessoa Pires and Malik Boudiaf and Dominic Culver and Rui Melo and Caio Corro and Andre F. T. Martins and Fabrizio Esposito and Vera Lúcia Raposo and Sofia Morgado and Michael Desa},
year={2024},
eprint={2403.03883},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ZainAli60/miner_ | ZainAli60 | 2024-03-10T23:43:29Z | 189 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T20:49:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
btmiller/output | btmiller | 2024-03-10T23:37:44Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/flan-t5-small",
"base_model:adapter:google/flan-t5-small",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T23:37:43Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: google/flan-t5-small
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
USGOV/Gov | USGOV | 2024-03-10T23:30:04Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-03-10T23:30:04Z | ---
license: other
license_name: government-open-source
license_link: LICENSE
---
|
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-5 | alinerodrigues | 2024-03-10T23:25:48Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-10T19:49:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-5
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1888
- Wer: 0.1049
- Cer: 0.0343
## Model description
More information needed
## Intended uses & limitations
More information needed
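As a rough sketch of how this checkpoint can be used for Portuguese transcription (assuming the repository ships a matching processor and that the input audio is resampled to 16 kHz mono; the file name is a placeholder):

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-5"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load an audio file and resample it to the 16 kHz rate the model expects
speech, _ = librosa.load("example.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```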
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 31.7094 | 1.0 | 58 | 3.6328 | 1.0 | 1.0 |
| 7.717 | 2.0 | 116 | 3.1503 | 1.0 | 1.0 |
| 7.717 | 3.0 | 174 | 3.0149 | 1.0 | 1.0 |
| 3.0503 | 4.0 | 232 | 2.9605 | 1.0 | 1.0 |
| 3.0503 | 5.0 | 290 | 2.9212 | 1.0 | 1.0 |
| 2.9265 | 6.0 | 348 | 2.8995 | 1.0 | 1.0 |
| 2.8799 | 7.0 | 406 | 2.6417 | 1.0 | 1.0 |
| 2.8799 | 8.0 | 464 | 1.3314 | 0.9950 | 0.2921 |
| 2.0484 | 9.0 | 522 | 0.7075 | 0.3918 | 0.1000 |
| 2.0484 | 10.0 | 580 | 0.5138 | 0.2276 | 0.0664 |
| 0.9682 | 11.0 | 638 | 0.4169 | 0.2071 | 0.0596 |
| 0.9682 | 12.0 | 696 | 0.3580 | 0.1835 | 0.0530 |
| 0.6198 | 13.0 | 754 | 0.3281 | 0.1719 | 0.0513 |
| 0.529 | 14.0 | 812 | 0.3166 | 0.1692 | 0.0502 |
| 0.529 | 15.0 | 870 | 0.2954 | 0.1595 | 0.0483 |
| 0.445 | 16.0 | 928 | 0.2783 | 0.1502 | 0.0453 |
| 0.445 | 17.0 | 986 | 0.2721 | 0.1452 | 0.0445 |
| 0.3943 | 18.0 | 1044 | 0.2537 | 0.1390 | 0.0415 |
| 0.3798 | 19.0 | 1102 | 0.2567 | 0.1332 | 0.0416 |
| 0.3798 | 20.0 | 1160 | 0.2434 | 0.1196 | 0.0388 |
| 0.3459 | 21.0 | 1218 | 0.2421 | 0.1181 | 0.0384 |
| 0.3459 | 22.0 | 1276 | 0.2252 | 0.1150 | 0.0365 |
| 0.3187 | 23.0 | 1334 | 0.2331 | 0.1146 | 0.0368 |
| 0.3187 | 24.0 | 1392 | 0.2195 | 0.1181 | 0.0371 |
| 0.2982 | 25.0 | 1450 | 0.2180 | 0.1181 | 0.0375 |
| 0.2874 | 26.0 | 1508 | 0.2181 | 0.1069 | 0.0355 |
| 0.2874 | 27.0 | 1566 | 0.2159 | 0.1099 | 0.0360 |
| 0.2542 | 28.0 | 1624 | 0.2173 | 0.1161 | 0.0380 |
| 0.2542 | 29.0 | 1682 | 0.2127 | 0.1080 | 0.0358 |
| 0.2663 | 30.0 | 1740 | 0.2112 | 0.1158 | 0.0372 |
| 0.2663 | 31.0 | 1798 | 0.2114 | 0.1130 | 0.0364 |
| 0.2371 | 32.0 | 1856 | 0.2052 | 0.1092 | 0.0359 |
| 0.2348 | 33.0 | 1914 | 0.2044 | 0.1061 | 0.0346 |
| 0.2348 | 34.0 | 1972 | 0.2067 | 0.1072 | 0.0344 |
| 0.2368 | 35.0 | 2030 | 0.2023 | 0.1099 | 0.0350 |
| 0.2368 | 36.0 | 2088 | 0.1992 | 0.1049 | 0.0353 |
| 0.217 | 37.0 | 2146 | 0.1972 | 0.1076 | 0.0354 |
| 0.234 | 38.0 | 2204 | 0.1938 | 0.1076 | 0.0347 |
| 0.234 | 39.0 | 2262 | 0.1982 | 0.1069 | 0.0348 |
| 0.1979 | 40.0 | 2320 | 0.1945 | 0.1061 | 0.0346 |
| 0.1979 | 41.0 | 2378 | 0.2003 | 0.1069 | 0.0353 |
| 0.2062 | 42.0 | 2436 | 0.1970 | 0.1053 | 0.0350 |
| 0.2062 | 43.0 | 2494 | 0.1984 | 0.1007 | 0.0341 |
| 0.2011 | 44.0 | 2552 | 0.1992 | 0.1072 | 0.0343 |
| 0.1807 | 45.0 | 2610 | 0.1962 | 0.1084 | 0.0342 |
| 0.1807 | 46.0 | 2668 | 0.1958 | 0.1030 | 0.0334 |
| 0.1982 | 47.0 | 2726 | 0.1928 | 0.1038 | 0.0340 |
| 0.1982 | 48.0 | 2784 | 0.1961 | 0.1053 | 0.0344 |
| 0.1948 | 49.0 | 2842 | 0.1939 | 0.1049 | 0.0336 |
| 0.1777 | 50.0 | 2900 | 0.1888 | 0.1049 | 0.0343 |
| 0.1777 | 51.0 | 2958 | 0.1930 | 0.1026 | 0.0336 |
| 0.1655 | 52.0 | 3016 | 0.1900 | 0.1018 | 0.0333 |
| 0.1655 | 53.0 | 3074 | 0.1950 | 0.1034 | 0.0331 |
| 0.1805 | 54.0 | 3132 | 0.1946 | 0.1045 | 0.0340 |
| 0.1805 | 55.0 | 3190 | 0.1959 | 0.1030 | 0.0337 |
| 0.1829 | 56.0 | 3248 | 0.1933 | 0.0987 | 0.0325 |
| 0.1621 | 57.0 | 3306 | 0.1908 | 0.0976 | 0.0325 |
| 0.1621 | 58.0 | 3364 | 0.1892 | 0.1010 | 0.0331 |
| 0.1702 | 59.0 | 3422 | 0.1907 | 0.0995 | 0.0322 |
| 0.1702 | 60.0 | 3480 | 0.1934 | 0.1003 | 0.0326 |
| 0.1652 | 61.0 | 3538 | 0.1959 | 0.0987 | 0.0328 |
| 0.1652 | 62.0 | 3596 | 0.1961 | 0.0976 | 0.0323 |
| 0.1567 | 63.0 | 3654 | 0.1927 | 0.0991 | 0.0330 |
| 0.1496 | 64.0 | 3712 | 0.1912 | 0.0983 | 0.0327 |
| 0.1496 | 65.0 | 3770 | 0.1963 | 0.1007 | 0.0330 |
| 0.1672 | 66.0 | 3828 | 0.1958 | 0.0999 | 0.0328 |
| 0.1672 | 67.0 | 3886 | 0.1962 | 0.0987 | 0.0328 |
| 0.141 | 68.0 | 3944 | 0.1957 | 0.0964 | 0.0320 |
| 0.144 | 69.0 | 4002 | 0.1942 | 0.0949 | 0.0316 |
| 0.144 | 70.0 | 4060 | 0.1931 | 0.0995 | 0.0331 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
hamzasidat/Hamzas_Bert_Irony3 | hamzasidat | 2024-03-10T23:09:59Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T23:09:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hamzasidat/BertIronyResults3 | hamzasidat | 2024-03-10T23:09:55Z | 179 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T23:09:12Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BertIronyResults3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertIronyResults3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5868
- Accuracy: 0.6932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 179 | 0.5868 | 0.6932 |
| No log | 2.0 | 358 | 0.6104 | 0.6869 |
| 0.4907 | 3.0 | 537 | 0.6448 | 0.7026 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
hamzasidat/Hamzas_assignment1_Albert2 | hamzasidat | 2024-03-10T23:05:49Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T23:05:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Francois2511/distilbert-base-uncased-finetuned-emotion | Francois2511 | 2024-03-10T23:03:13Z | 94 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T22:26:32Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.921552692432107
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9215
- F1: 0.9216
## Model description
More information needed
## Intended uses & limitations
More information needed
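A minimal usage sketch with the `pipeline` API (the example sentence is illustrative; depending on the saved config, the returned labels may be generic `LABEL_0` … `LABEL_5` rather than emotion names):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Francois2511/distilbert-base-uncased-finetuned-emotion",
)

# The emotion dataset has six classes: sadness, joy, love, anger, fear, surprise
print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```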
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8067 | 1.0 | 250 | 0.3244 | 0.902 | 0.9008 |
| 0.2493 | 2.0 | 500 | 0.2165 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
saintazunya/outputs-dreambooth-sdxl-kanade | saintazunya | 2024-03-10T23:03:06Z | 2 | 2 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-10T22:17:56Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of skskanadetachibana figure
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - saintazunya/outputs-dreambooth-sdxl-kanade
<Gallery />
## Model description
These are saintazunya/outputs-dreambooth-sdxl-kanade LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of skskanadetachibana figure` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](saintazunya/outputs-dreambooth-sdxl-kanade/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
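Pending the official snippet above, a minimal sketch with `diffusers` should look roughly like this (the VAE choice follows the training note above; the inference settings and output file name are illustrative assumptions):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The card notes the LoRA was trained with the fp16-fix SDXL VAE
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Load the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("saintazunya/outputs-dreambooth-sdxl-kanade")

# Use the trigger phrase from the card
image = pipe("a photo of skskanadetachibana figure", num_inference_steps=30).images[0]
image.save("kanade.png")
```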
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
keskin-oguzhan/phi2-squadv2 | keskin-oguzhan | 2024-03-10T23:02:58Z | 39 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T22:58:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hamzasidat/Hamzas_assignment1_Bert2 | hamzasidat | 2024-03-10T23:02:02Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T23:02:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hamzasidat/BertResults2 | hamzasidat | 2024-03-10T23:02:00Z | 177 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T23:01:39Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: BertResults2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertResults2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1487
- Accuracy: 0.94
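
A minimal, illustrative inference sketch, assuming this repository id and the standard 🤗 `pipeline` API (not an official usage snippet from the card):

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub (repo id assumed).
classifier = pipeline("text-classification", model="hamzasidat/BertResults2")

# Returns a list of {'label': ..., 'score': ...} dictionaries.
print(classifier("I can't believe how well this turned out!"))
```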
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2236 | 1.0 | 1000 | 0.1929 | 0.924 |
| 0.1179 | 2.0 | 2000 | 0.1487 | 0.94 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tsavage68/mistralit2_1000_STEPS_1e8_rate_03_beta_DPO | tsavage68 | 2024-03-10T22:55:57Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T22:49:41Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mistralit2_1000_STEPS_1e8_rate_03_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralit2_1000_STEPS_1e8_rate_03_beta_DPO
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6902
- Rewards/chosen: -0.0213
- Rewards/rejected: -0.0282
- Rewards/accuracies: 0.4945
- Rewards/margins: 0.0070
- Logps/rejected: -28.6665
- Logps/chosen: -23.4567
- Logits/rejected: -2.8649
- Logits/chosen: -2.8651
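
The training hyperparameters listed further down map onto TRL's `DPOTrainer` roughly as in the illustrative sketch below. It is a sketch only: the preference dataset here is a placeholder, and `beta=0.3` is inferred from the model name rather than stated in the card.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
model = AutoModelForCausalLM.from_pretrained(model_id)
ref_model = AutoModelForCausalLM.from_pretrained(model_id)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference data; the actual dataset is not named in this card.
preference_data = Dataset.from_dict({
    "prompt": ["Summarize: the cat sat on the mat."],
    "chosen": ["A cat sat on a mat."],
    "rejected": ["Dogs are great pets."],
})

args = TrainingArguments(
    output_dir="mistralit2_dpo",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    learning_rate=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=args,
    beta=0.3,                       # inferred from the model name ("03_beta")
    train_dataset=preference_data,
    tokenizer=tokenizer,
)
trainer.train()
```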
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6911 | 0.1 | 50 | 0.6909 | 0.0027 | -0.0025 | 0.4967 | 0.0052 | -28.5807 | -23.3768 | -2.8653 | -2.8655 |
| 0.6916 | 0.2 | 100 | 0.6928 | -0.0010 | -0.0023 | 0.4571 | 0.0014 | -28.5802 | -23.3891 | -2.8653 | -2.8655 |
| 0.6924 | 0.29 | 150 | 0.6922 | -0.0091 | -0.0117 | 0.4681 | 0.0026 | -28.6115 | -23.4162 | -2.8651 | -2.8654 |
| 0.6941 | 0.39 | 200 | 0.6914 | -0.0066 | -0.0109 | 0.4879 | 0.0043 | -28.6088 | -23.4078 | -2.8652 | -2.8654 |
| 0.6942 | 0.49 | 250 | 0.6911 | -0.0070 | -0.0120 | 0.4791 | 0.0050 | -28.6123 | -23.4090 | -2.8649 | -2.8652 |
| 0.6909 | 0.59 | 300 | 0.6921 | -0.0151 | -0.0181 | 0.4593 | 0.0030 | -28.6327 | -23.4362 | -2.8650 | -2.8653 |
| 0.696 | 0.68 | 350 | 0.6903 | -0.0140 | -0.0207 | 0.5121 | 0.0067 | -28.6414 | -23.4326 | -2.8651 | -2.8653 |
| 0.6907 | 0.78 | 400 | 0.6904 | -0.0153 | -0.0217 | 0.4945 | 0.0064 | -28.6448 | -23.4369 | -2.8649 | -2.8652 |
| 0.6895 | 0.88 | 450 | 0.6898 | -0.0157 | -0.0232 | 0.4945 | 0.0075 | -28.6497 | -23.4380 | -2.8649 | -2.8652 |
| 0.6902 | 0.98 | 500 | 0.6892 | -0.0192 | -0.0282 | 0.5165 | 0.0090 | -28.6665 | -23.4500 | -2.8650 | -2.8652 |
| 0.6923 | 1.07 | 550 | 0.6893 | -0.0196 | -0.0282 | 0.5385 | 0.0086 | -28.6663 | -23.4511 | -2.8649 | -2.8652 |
| 0.6957 | 1.17 | 600 | 0.6897 | -0.0210 | -0.0288 | 0.5011 | 0.0078 | -28.6684 | -23.4560 | -2.8649 | -2.8652 |
| 0.6885 | 1.27 | 650 | 0.6897 | -0.0173 | -0.0251 | 0.5143 | 0.0078 | -28.6560 | -23.4436 | -2.8650 | -2.8653 |
| 0.6912 | 1.37 | 700 | 0.6906 | -0.0207 | -0.0268 | 0.4967 | 0.0061 | -28.6617 | -23.4548 | -2.8650 | -2.8652 |
| 0.6874 | 1.46 | 750 | 0.6903 | -0.0216 | -0.0282 | 0.4923 | 0.0065 | -28.6663 | -23.4580 | -2.8650 | -2.8652 |
| 0.6896 | 1.56 | 800 | 0.6877 | -0.0180 | -0.0298 | 0.5451 | 0.0119 | -28.6719 | -23.4457 | -2.8649 | -2.8651 |
| 0.6904 | 1.66 | 850 | 0.6905 | -0.0217 | -0.0279 | 0.4791 | 0.0062 | -28.6655 | -23.4582 | -2.8649 | -2.8651 |
| 0.6913 | 1.76 | 900 | 0.6902 | -0.0213 | -0.0282 | 0.4945 | 0.0070 | -28.6665 | -23.4567 | -2.8649 | -2.8651 |
| 0.6977 | 1.86 | 950 | 0.6902 | -0.0213 | -0.0282 | 0.4945 | 0.0070 | -28.6665 | -23.4567 | -2.8649 | -2.8651 |
| 0.6892 | 1.95 | 1000 | 0.6902 | -0.0213 | -0.0282 | 0.4945 | 0.0070 | -28.6665 | -23.4567 | -2.8649 | -2.8651 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
arsruts/distilbert-base-uncased-finetuned-cola | arsruts | 2024-03-10T22:54:08Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-08T13:37:09Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8855
- Matthews Correlation: 0.5339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5155 | 1.0 | 535 | 0.4625 | 0.4354 |
| 0.3412 | 2.0 | 1070 | 0.4636 | 0.5212 |
| 0.2297 | 3.0 | 1605 | 0.6616 | 0.5111 |
| 0.1737 | 4.0 | 2140 | 0.8490 | 0.5265 |
| 0.1228 | 5.0 | 2675 | 0.8855 | 0.5339 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
SjardiWillems/distilbert-base-uncased-finetuned-stsb | SjardiWillems | 2024-03-10T22:47:48Z | 23 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:SjardiWillems/distilbert-base-uncased-finetuned-stsb",
"base_model:finetune:SjardiWillems/distilbert-base-uncased-finetuned-stsb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-05T21:52:02Z | ---
license: apache-2.0
base_model: SjardiWillems/distilbert-base-uncased-finetuned-stsb
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: distilbert-base-uncased-finetuned-stsb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-stsb
This model is a fine-tuned version of [SjardiWillems/distilbert-base-uncased-finetuned-stsb](https://huggingface.co/SjardiWillems/distilbert-base-uncased-finetuned-stsb) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5419
- Pearson: 0.8736
- Spearmanr: 0.8702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.1992432473500055e-06
- train_batch_size: 64
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 90 | 0.5404 | 0.8727 | 0.8690 |
| No log | 2.0 | 180 | 0.5394 | 0.8736 | 0.8701 |
| No log | 3.0 | 270 | 0.5394 | 0.8738 | 0.8703 |
| No log | 4.0 | 360 | 0.5419 | 0.8736 | 0.8702 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Litzy619/V0309P6 | Litzy619 | 2024-03-10T22:45:47Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T07:39:51Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309P6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309P6
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.969 | 0.09 | 10 | 0.5527 |
| 0.2118 | 0.17 | 20 | 0.0895 |
| 0.1076 | 0.26 | 30 | 0.0750 |
| 0.0998 | 0.34 | 40 | 0.0690 |
| 0.0936 | 0.43 | 50 | 0.0643 |
| 0.0846 | 0.51 | 60 | 0.0642 |
| 0.0784 | 0.6 | 70 | 0.0639 |
| 0.0857 | 0.68 | 80 | 0.0668 |
| 0.0748 | 0.77 | 90 | 0.0641 |
| 0.111 | 0.85 | 100 | 0.0680 |
| 0.0874 | 0.94 | 110 | 0.0704 |
| 0.0842 | 1.02 | 120 | 0.0675 |
| 0.0797 | 1.11 | 130 | 0.0678 |
| 0.0731 | 1.19 | 140 | 0.0642 |
| 0.0714 | 1.28 | 150 | 0.0584 |
| 0.0709 | 1.37 | 160 | 0.0621 |
| 0.0703 | 1.45 | 170 | 0.0587 |
| 0.0638 | 1.54 | 180 | 0.0595 |
| 0.0678 | 1.62 | 190 | 0.0580 |
| 0.067 | 1.71 | 200 | 0.0600 |
| 0.0672 | 1.79 | 210 | 0.0604 |
| 0.0627 | 1.88 | 220 | 0.0640 |
| 0.0587 | 1.96 | 230 | 0.0592 |
| 0.057 | 2.05 | 240 | 0.0622 |
| 0.0486 | 2.13 | 250 | 0.0663 |
| 0.0484 | 2.22 | 260 | 0.0690 |
| 0.0457 | 2.3 | 270 | 0.0677 |
| 0.0529 | 2.39 | 280 | 0.0636 |
| 0.0533 | 2.47 | 290 | 0.0622 |
| 0.0523 | 2.56 | 300 | 0.0627 |
| 0.0523 | 2.65 | 310 | 0.0638 |
| 0.0456 | 2.73 | 320 | 0.0642 |
| 0.048 | 2.82 | 330 | 0.0648 |
| 0.0454 | 2.9 | 340 | 0.0642 |
| 0.0491 | 2.99 | 350 | 0.0648 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Litzy619/V0309P4 | Litzy619 | 2024-03-10T22:45:17Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T07:37:44Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309P4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309P4
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1886 | 0.09 | 10 | 0.9747 |
| 0.3651 | 0.17 | 20 | 0.0977 |
| 0.1129 | 0.26 | 30 | 0.0765 |
| 0.0955 | 0.34 | 40 | 0.0707 |
| 0.0894 | 0.43 | 50 | 0.0684 |
| 0.083 | 0.51 | 60 | 0.0679 |
| 0.0762 | 0.6 | 70 | 0.0688 |
| 0.0807 | 0.68 | 80 | 0.0672 |
| 0.0699 | 0.77 | 90 | 0.0735 |
| 0.0699 | 0.85 | 100 | 0.0735 |
| 0.0757 | 0.94 | 110 | 0.0663 |
| 0.0726 | 1.02 | 120 | 0.0632 |
| 0.0641 | 1.11 | 130 | 0.0692 |
| 0.0627 | 1.19 | 140 | 0.0625 |
| 0.0579 | 1.28 | 150 | 0.0625 |
| 0.0579 | 1.37 | 160 | 0.0682 |
| 0.0564 | 1.45 | 170 | 0.0642 |
| 0.0544 | 1.54 | 180 | 0.0651 |
| 0.0565 | 1.62 | 190 | 0.0623 |
| 0.057 | 1.71 | 200 | 0.0605 |
| 0.0589 | 1.79 | 210 | 0.0602 |
| 0.0538 | 1.88 | 220 | 0.0659 |
| 0.0528 | 1.96 | 230 | 0.0623 |
| 0.0482 | 2.05 | 240 | 0.0640 |
| 0.0396 | 2.13 | 250 | 0.0693 |
| 0.0398 | 2.22 | 260 | 0.0753 |
| 0.0372 | 2.3 | 270 | 0.0771 |
| 0.0463 | 2.39 | 280 | 0.0707 |
| 0.0447 | 2.47 | 290 | 0.0676 |
| 0.0429 | 2.56 | 300 | 0.0672 |
| 0.0454 | 2.65 | 310 | 0.0670 |
| 0.0377 | 2.73 | 320 | 0.0678 |
| 0.0387 | 2.82 | 330 | 0.0690 |
| 0.0394 | 2.9 | 340 | 0.0690 |
| 0.0414 | 2.99 | 350 | 0.0689 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
NorGLM/NorLlama-3B-Instruction-peft | NorGLM | 2024-03-10T22:42:28Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-10T22:40:42Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorLlama-3B-Instruction-peft is trained on top of the [NorLlama-3B](https://huggingface.co/NorGLM/NorLlama-3B) model on the [NO-Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca) dataset.
Prompt format:
```
{instruction} {input} : {output}
```
Inference prompt:
```
{instruction} {input} :
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
source_model_id = "NorGLM/NorLlama-3B"
peft_model_id = "NorGLM/NorLlama-3B-Instruction-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the last 20% of the NO-Alpaca dataset:
```python
import json

import torch
from datasets import load_dataset
from transformers import set_seed

# Device used for inference.
torch_device = "cuda" if torch.cuda.is_available() else "cpu"

def merge_columns(example):
if str(example["input"]) == "":
example["text"] = str(example["instruction"]) + " : "
else:
example["text"] = str(example["instruction"]) + " " + str(example["input"]) + " : "
return example
def generate_text(text, max_length=200, do_sample=True, top_p = 0.92, top_k=0):
set_seed(42)
model_inputs = tokenizer(text, return_tensors='pt').to(torch_device)
output = model.generate(**model_inputs, max_new_tokens = max_length, no_repeat_ngram_size=2, pad_token_id=tokenizer.eos_token_id)
return tokenizer.decode(output[0], skip_special_tokens=True)
print("--LOADING EVAL DATA---")
eval_data = load_dataset("NbAiLab/norwegian-alpaca", split='train[-20%:]')
eval_data = eval_data.map(merge_columns)  # build the inference prompts in a 'text' column
print("--MAKING PREDICTIONS---")
model.eval()
output_file = <output file name>
with open(output_file, 'w', encoding='utf-8-sig') as file:
generated_text = []
for question in eval_data['text']:
generated_text.append({"generated_text": generate_text(question)})
print({"text_generated": len(generated_text)})
json_lines = [json.dumps(data) for data in generated_text]
json_data = "\n".join(json_lines)
file.write(json_data)
```
## Note
More training details will be released soon! |
ThuyNT03/CS505_MvPCOQE_viT5_Prompting5_top1 | ThuyNT03 | 2024-03-10T22:41:01Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T16:30:59Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_MvPCOQE_viT5_Prompting5_top1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_MvPCOQE_viT5_Prompting5_top1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
Jackline/Blip2-HateSpeech-PEFT-LLM-2.7b | Jackline | 2024-03-10T22:37:03Z | 3 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Salesforce/blip2-opt-2.7b",
"base_model:adapter:Salesforce/blip2-opt-2.7b",
"region:us"
] | null | 2024-03-10T20:32:22Z | ---
library_name: peft
base_model: Salesforce/blip2-opt-2.7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
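
An illustrative loading sketch, assuming the adapter is attached to the base model quantized in 8-bit as in the config above; the authors' exact inference setup is not documented:

```python
import torch
from peft import PeftModel
from transformers import Blip2ForConditionalGeneration, Blip2Processor

base_id = "Salesforce/blip2-opt-2.7b"
adapter_id = "Jackline/Blip2-HateSpeech-PEFT-LLM-2.7b"

processor = Blip2Processor.from_pretrained(base_id)
base_model = Blip2ForConditionalGeneration.from_pretrained(
    base_id,
    load_in_8bit=True,      # matches the bitsandbytes config listed above
    device_map="auto",
    torch_dtype=torch.float16,
)

# Attach the PEFT adapter weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```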
### Framework versions
- PEFT 0.6.1
|
hamzasidat/DistilbertIronyResults3 | hamzasidat | 2024-03-10T22:33:08Z | 176 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T22:32:41Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DistilbertIronyResults3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilbertIronyResults3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6026
- Accuracy: 0.6806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 179 | 0.6294 | 0.6147 |
| No log | 2.0 | 358 | 0.6026 | 0.6806 |
| 0.5319 | 3.0 | 537 | 0.6334 | 0.6817 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
EarthnDusk/Lora_Extractions | EarthnDusk | 2024-03-10T22:31:59Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-06T04:21:26Z | ---
license: creativeml-openrail-m
---
LoRA extractions via Bmaltais/Kohya SS
---
These are LoRA extractions of our existing models; feel free to use them, and there should be no activation tag.
The SD 1.5 extractions are 128 dim / 128 alpha, but SplatterpunkAlpha is SDXL and is 32/16.
Feel free, with credit if possible, to merge them back into your own content.
The SD 1.5 versions didn't turn out well, unless we tested them wrong.
Splatterpunk is the SDXL one.
TeeZee/GALAXY-XB-v.02 | TeeZee | 2024-03-10T22:25:42Z | 54 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-09T22:19:44Z | ---
license: apache-2.0
model-index:
- name: GALAXY-XB-v.02
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.27
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.6
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 42.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02
name: Open LLM Leaderboard
---
### TeeZee/GALAXY-XB-v.02 ###
An experiment: can DUS be taken one or more steps further?
### Technical notes:
- 10 layers removed from both models this time, 2 more than in the original paper (a rough layer-slicing sketch follows these notes)
- base version of upstage/SOLAR-10.7B-v1.0 used for the merge
- no finetuning done yet; this is just a merge, the first step in the DUS paper
- next step, if evaluation proves that it's at least as 'smart' as the base model, should be finetuning to 'recover' after the merge
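
A rough, illustrative layer-slicing sketch of the idea above. It assumes SOLAR-10.7B-v1.0's 48 decoder layers and drops 10 layers from the tail of one copy and the head of the other before stacking; the actual merge was not necessarily produced with this exact code.

```python
import torch
from transformers import AutoModelForCausalLM

base_id = "upstage/SOLAR-10.7B-v1.0"
copy_a = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
copy_b = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

n = copy_a.config.num_hidden_layers   # 48 for SOLAR-10.7B
drop = 10                             # layers removed from each copy (vs. 8 in the DUS paper)

# Keep layers [0, n-drop) from copy A and [drop, n) from copy B, then stack them depth-wise.
merged = list(copy_a.model.layers[: n - drop]) + list(copy_b.model.layers[drop:])
copy_a.model.layers = torch.nn.ModuleList(merged)
copy_a.config.num_hidden_layers = len(merged)

copy_a.save_pretrained("GALAXY-XB-draft")
```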
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__GALAXY-XB-v.02)
| Metric |Value|
|---------------------------------|----:|
|Avg. |62.48|
|AI2 Reasoning Challenge (25-Shot)|60.67|
|HellaSwag (10-Shot) |83.27|
|MMLU (5-Shot) |64.99|
|TruthfulQA (0-shot) |43.60|
|Winogrande (5-shot) |80.27|
|GSM8k (5-shot) |42.08|
|
dcarpintero/digit-classifier | dcarpintero | 2024-03-10T22:24:35Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-03-08T17:54:00Z | ---
license: apache-2.0
---
Implemented as a **multi-layer perceptron (MLP) for classifying handwritten digits (0-9)**
[[Annotated Notebook](https://github.com/dcarpintero/fastai-deeplearning/blob/main/course2024/lesson_03.full.mnist.mlp.md)]
**Model Architecture and Results**
The model comprises a flattening layer and three linear layers (`(256, 64)` hidden dimensions) with ReLUs for non-linearity. It achieves 95.6% accuracy after `15 training epochs` with `batch size = 64`. The training and test MNIST datasets are loaded with PyTorch [dataloaders](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html).
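
An illustrative PyTorch sketch of the described architecture, assuming 28x28 MNIST inputs and 10 output classes (see the annotated notebook for the actual implementation):

```python
import torch.nn as nn

# Flatten -> Linear(784, 256) -> ReLU -> Linear(256, 64) -> ReLU -> Linear(64, 10)
model = nn.Sequential(
    nn.Flatten(),             # 28x28 image -> 784-dim vector
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 10),        # one logit per digit class (0-9)
)
```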
|
TeeZee/GALAXY-XB-v.01 | TeeZee | 2024-03-10T22:24:21Z | 57 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-09T18:48:51Z | ---
license: apache-2.0
model-index:
- name: GALAXY-XB-v.01
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.67
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.44
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.01
name: Open LLM Leaderboard
---
### TeeZee/GALAXY-XB-v.01 ###
An experiment: can DUS be taken one or more steps further?
### Technical notes:
- 8 layers removed from both models, as per the original paper
- base version of upstage/SOLAR-10.7B-v1.0 used for the merge
- no finetuning done yet; this is just a merge, the first step in the DUS paper
- next step, if evaluation proves that it's at least as 'smart' as the base model, should be finetuning to 'recover' after the merge
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__GALAXY-XB-v.01)
| Metric |Value|
|---------------------------------|----:|
|Avg. |62.87|
|AI2 Reasoning Challenge (25-Shot)|60.92|
|HellaSwag (10-Shot) |82.92|
|MMLU (5-Shot) |65.11|
|TruthfulQA (0-shot) |43.67|
|Winogrande (5-shot) |81.14|
|GSM8k (5-shot) |43.44|
|
numen-tech/TinyLlama-1.1B-Chat-v1.0-w4a16g128asym | numen-tech | 2024-03-10T22:21:47Z | 0 | 0 | null | [
"arxiv:2308.13137",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T22:17:19Z | ---
license: apache-2.0
---
4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
|
thomasolav/distilbert-base-uncased-finetuned-cola | thomasolav | 2024-03-10T22:11:32Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T21:56:27Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8412
- Matthews Correlation: 0.5340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5208 | 1.0 | 535 | 0.4576 | 0.4452 |
| 0.3435 | 2.0 | 1070 | 0.4613 | 0.5168 |
| 0.2338 | 3.0 | 1605 | 0.6399 | 0.5195 |
| 0.1753 | 4.0 | 2140 | 0.8412 | 0.5340 |
| 0.1295 | 5.0 | 2675 | 0.8539 | 0.5305 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
sweetfelinity/ppo-SnowballTarget | sweetfelinity | 2024-03-10T22:00:03Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-03-10T22:00:00Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sweetfelinity/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
grace-pro/one_half_data_high_rank_v2 | grace-pro | 2024-03-10T21:55:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T21:53:43Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: one_half_data_high_rank_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# one_half_data_high_rank_v2
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6564
- Precision: 0.8403
- Recall: 0.9383
- F1-score: 0.8866
- Accuracy: 0.8450
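
An illustrative loading sketch with 🤗 PEFT, assuming the adapter is applied directly to the listed base model (the prompt format used during training is not specified in this card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "grace-pro/one_half_data_high_rank_v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```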
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 0.535 | 1.0 | 24544 | 0.6564 | 0.8403 | 0.9383 | 0.8866 | 0.8450 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
roiyeho/bart-large-samsum | roiyeho | 2024-03-10T21:54:35Z | 98 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-02-19T04:34:55Z | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-samsum
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3635
- Rouge1: 0.3962
- Rouge2: 0.2011
- Rougel: 0.3064
- Rougelsum: 0.3064
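
An illustrative inference sketch, assuming this repository id and the 🤗 summarization pipeline; the dialogue and generation settings are placeholders:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="roiyeho/bart-large-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```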
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.3824 | 0.43 | 400 | 1.4666 | 0.3995 | 0.2014 | 0.3061 | 0.3064 |
| 1.2617 | 0.87 | 800 | 1.3350 | 0.4065 | 0.2063 | 0.3113 | 0.3115 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AwAppp/benchmarks_4bit_batch_size45 | AwAppp | 2024-03-10T21:49:33Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:49:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
m7n/dierenleven-sdxl-lora-001 | m7n | 2024-03-10T21:48:51Z | 3 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"diffusers-training",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-10T17:23:19Z | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers-training
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'litograph in the style of <s0><s1>, showing a beautiful bird of paradise'
output:
url:
"image_0.png"
- text: 'litograph in the style of <s0><s1>, showing a beautiful bird of paradise'
output:
url:
"image_1.png"
- text: 'litograph in the style of <s0><s1>, showing a beautiful bird of paradise'
output:
url:
"image_2.png"
- text: 'litograph in the style of <s0><s1>, showing a beautiful bird of paradise'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: litograph in the style of <s0><s1>, showing a beautiful bird of paradise
license: openrail++
---
# SDXL LoRA DreamBooth - m7n/dierenleven-sdxl-lora-001
<Gallery />
## Model description
### These are m7n/dierenleven-sdxl-lora-001 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`dierenleven-sdxl-lora-001.safetensors` here 💾](/m7n/dierenleven-sdxl-lora-001/blob/main/dierenleven-sdxl-lora-001.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:dierenleven-sdxl-lora-001:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`dierenleven-sdxl-lora-001_emb.safetensors` here 💾](/m7n/dierenleven-sdxl-lora-001/blob/main/dierenleven-sdxl-lora-001_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `dierenleven-sdxl-lora-001_emb` to your prompt. For example, `litograph in the style of dierenleven-sdxl-lora-001_emb, showing a beautiful bird of paradise`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('m7n/dierenleven-sdxl-lora-001', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='m7n/dierenleven-sdxl-lora-001', filename='dierenleven-sdxl-lora-001_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('litograph in the style of <s0><s1>, showing a beautiful bird of paradise').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
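As one concrete example of the weighting mentioned above, the LoRA influence can be scaled at call time; this is a minimal sketch continuing from the snippet above, and the `0.8` value is purely illustrative rather than a recommendation from this card:
```py
# Scale the LoRA contribution at inference time (continues from the pipeline built above).
# 1.0 is full strength, 0.0 effectively disables the LoRA.
image = pipeline(
    'litograph in the style of <s0><s1>, showing a beautiful bird of paradise',
    cross_attention_kwargs={"scale": 0.8},
).images[0]
```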
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/m7n/dierenleven-sdxl-lora-001/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
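To match the training setup at inference time, the same VAE can optionally be swapped into the pipeline; a small sketch (not part of the original card), continuing from the diffusers snippet above:
```py
# Optional: use the same fp16-safe SDXL VAE that was used during training.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline.vae = vae.to("cuda")
```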
|
WhiteRabbitNeo/WhiteRabbitNeo-7B-v1.5a | WhiteRabbitNeo | 2024-03-10T21:45:53Z | 98 | 47 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T02:33:26Z | ---
license: other
license_name: deepseek
license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/blob/main/LICENSE
---
# Our latest 33B model is live (We'll always be serving the newest model on our web app)!
Access at: https://www.whiterabbitneo.com/
# Our Discord Server
Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. Now permanent link to join)
# DeepSeek Coder Licence + WhiteRabbitNeo Extended Version
# Licence: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```
# Topics Covered:
```
- Open Ports: Identifying open ports is crucial as they can be entry points for attackers. Common ports to check include HTTP (80, 443), FTP (21), SSH (22), and SMB (445).
- Outdated Software or Services: Systems running outdated software or services are often vulnerable to exploits. This includes web servers, database servers, and any third-party software.
- Default Credentials: Many systems and services are installed with default usernames and passwords, which are well-known and can be easily exploited.
- Misconfigurations: Incorrectly configured services, permissions, and security settings can introduce vulnerabilities.
- Injection Flaws: SQL injection, command injection, and cross-site scripting (XSS) are common issues in web applications.
- Unencrypted Services: Services that do not use encryption (like HTTP instead of HTTPS) can expose sensitive data.
- Known Software Vulnerabilities: Checking for known vulnerabilities in software using databases like the National Vulnerability Database (NVD) or tools like Nessus or OpenVAS.
- Cross-Site Request Forgery (CSRF): This is where unauthorized commands are transmitted from a user that the web application trusts.
- Insecure Direct Object References: This occurs when an application provides direct access to objects based on user-supplied input.
- Security Misconfigurations in Web Servers/Applications: This includes issues like insecure HTTP headers or verbose error messages that reveal too much information.
- Broken Authentication and Session Management: This can allow attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities.
- Sensitive Data Exposure: Includes vulnerabilities that expose sensitive data, such as credit card numbers, health records, or personal information.
- API Vulnerabilities: In modern web applications, APIs are often used and can have vulnerabilities like insecure endpoints or data leakage.
- Denial of Service (DoS) Vulnerabilities: Identifying services that are vulnerable to DoS attacks, which can make the resource unavailable to legitimate users.
- Buffer Overflows: Common in older software, these vulnerabilities can allow an attacker to crash the system or execute arbitrary code.
```
# Terms of Use
By accessing and using this Artificial Intelligence (AI) model, you, the user, acknowledge and agree that you are solely responsible for your use of the model and its outcomes. You hereby agree to indemnify, defend, and hold harmless the creators, developers, and any affiliated persons or entities of this AI model from and against any and all claims, liabilities, damages, losses, costs, expenses, fees (including reasonable attorneys' fees and court costs) that may arise, directly or indirectly, from your use of the AI model.
This AI model is provided "as is" and "as available" without any warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. The creators make no warranty that the AI model will meet your requirements or be available on an uninterrupted, secure, or error-free basis.
Your use of the AI model is at your own risk and discretion, and you will be solely responsible for any damage to computer systems or loss of data that results from the use of the AI model.
This disclaimer constitutes part of the agreement between you and the creators of the AI model regarding your use of the model, superseding any prior agreements between you and the creators regarding your use of this AI model.
# WhiteRabbitNeo
<br>

<br>
WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity.
Our models are now getting released as a public preview of its capabilities, and also to assess the societal impact of such an AI.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "WhiteRabbitNeo/WhiteRabbitNeo-7B-v1.5a"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=False,
    load_in_8bit=True,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.5,
        "generate_len": 1024,
        "top_k": 50,
    }
    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"

conversation = f"SYSTEM: You are an AI that code. Answer with code."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    # print(conversation)
    json_data = {"prompt": user_input, "answer": answer}
    # print(json_data)
    # with open(output_file_path, "a") as output_file:
    #     output_file.write(json.dumps(json_data) + "\n")
```
|
AwAppp/benchmarks_4bit_batch_size20 | AwAppp | 2024-03-10T21:41:42Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:41:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
operablepattern/gemma-2b-it-Q | operablepattern | 2024-03-10T21:41:02Z | 11 | 1 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-07T19:00:03Z | ---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
This repository contains gemma 2B models quantized using llama.cpp.
For details of the model see https://huggingface.co/google/gemma-2b-it.
Details of the k-quants can be found here: https://github.com/ggerganov/llama.cpp/pull/1684
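These files can be loaded by any llama.cpp-compatible runtime; as a small usage sketch (not part of the original card), one of the GGUF files listed below could be loaded with the `llama-cpp-python` bindings:
```python
# Minimal sketch using llama-cpp-python after downloading one of the GGUF files below.
from llama_cpp import Llama

llm = Llama(model_path="gemma-2b-it-Q4_K_M.gguf", n_ctx=2048)
output = llm("Question: What is the capital of France?\nAnswer:", max_tokens=64)
print(output["choices"][0]["text"])
```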
## Provided files
| Name | Quant method | Bits | Size |
| ---- | ---- | ---- | ---- |
| [gemma-2b-it-Q4_K_M.gguf](https://huggingface.co/operablepattern/gemma-2b-it-Q/blob/main/gemma-2b-it-Q4_K_M.gguf) | Q4_K_M | 4 | 1.63 GB|
| [gemma-2b-it-Q5_K_M.gguf](https://huggingface.co/operablepattern/gemma-2b-it-Q/blob/main/gemma-2b-it-Q5_K_M.gguf) | Q5_K_M | 5 | 1.84 GB| |
AwAppp/benchmarks_4bit_batch_size15 | AwAppp | 2024-03-10T21:40:38Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:40:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
myownip/axolotl-openllama-1k-qlora-v02 | myownip | 2024-03-10T21:40:07Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-10T21:40:02Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openlm-research/open_llama_3b_v2
model-index:
- name: qlora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: openlm-research/open_llama_3b_v2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
datasets:
  - path: mhenrichsen/alpaca_2k_test
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
adapter: qlora
lora_model_dir:
sequence_len: 1024
sample_packing: true
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
output_dir: ./qlora-out
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
gptq_groupsize:
gptq_model_v1:
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```
</details><br>
# qlora-out
This model is a fine-tuned version of [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2567 | 0.0 | 1 | 1.3470 |
| 1.1738 | 0.25 | 108 | 1.1365 |
| 1.113 | 0.5 | 216 | 1.1231 |
| 1.413 | 0.75 | 324 | 1.1118 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0 |
AwAppp/benchmarks_4bit_batch_size5 | AwAppp | 2024-03-10T21:38:29Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T21:38:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
quirky-lats-at-mats/Llama-13b-IHY-CoT | quirky-lats-at-mats | 2024-03-10T21:37:21Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T21:18:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SDFASDGA/llm | SDFASDGA | 2024-03-10T21:37:07Z | 10 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2023-11-11T11:30:08Z | Models for llm.f90 - LLMs in Fortran
See Files and https://github.com/rbitr/llm.f90 and https://github.com/rbitr/ferrite for more detail
|
automerger/ShadowCalme-7B | automerger | 2024-03-10T21:33:15Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:CorticalStack/shadow-clown-7B-dare",
"base_model:merge:CorticalStack/shadow-clown-7B-dare",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.1.1",
"base_model:merge:MaziyarPanahi/Calme-7B-Instruct-v0.1.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T21:32:28Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- CorticalStack/shadow-clown-7B-dare
- MaziyarPanahi/Calme-7B-Instruct-v0.1.1
---
# ShadowCalme-7B
ShadowCalme-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [CorticalStack/shadow-clown-7B-dare](https://huggingface.co/CorticalStack/shadow-clown-7B-dare)
* [MaziyarPanahi/Calme-7B-Instruct-v0.1.1](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.1.1)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: CorticalStack/shadow-clown-7B-dare
        layer_range: [0, 32]
      - model: MaziyarPanahi/Calme-7B-Instruct-v0.1.1
        layer_range: [0, 32]
merge_method: slerp
base_model: CorticalStack/shadow-clown-7B-dare
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/ShadowCalme-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
gtroina/sd-class-butterflies-32 | gtroina | 2024-03-10T21:32:16Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-03-10T20:54:44Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('gtroina/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
macadeliccc/laser-dolphin-mixtral-4x7b-dpo-AWQ | macadeliccc | 2024-03-10T21:28:48Z | 8 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"base_model:macadeliccc/laser-dolphin-mixtral-4x7b-dpo",
"base_model:quantized:macadeliccc/laser-dolphin-mixtral-4x7b-dpo",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-03-10T20:49:35Z | ---
license: apache-2.0
base_model: macadeliccc/laser-dolphin-mixtral-4x7b-dpo
---
## OpenAI-compatible endpoint using vLLM
Runs well on a 4090.
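Once the server launched with the command below is running, it exposes an OpenAI-compatible API (port 8000 by default); as a rough sketch (the client code is an illustration, not part of the original card), it can be queried with the `openai` Python client:
```python
# Sketch: query the locally running vLLM OpenAI-compatible server (default port 8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="macadeliccc/laser-dolphin-mixtral-4x7b-dpo-AWQ",
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
)
print(response.choices[0].message.content)
```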
```
python -m vllm.entrypoints.openai.api_server --model macadeliccc/laser-dolphin-mixtral-4x7b-dpo-AWQ --max-model-len 25000
``` |
tsavage68/mistralit2_1000_STEPS_1e8_rate_0.1_beta_DPO | tsavage68 | 2024-03-10T21:22:33Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T21:18:48Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mistralit2_1000_STEPS_1e8_rate_0.1_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralit2_1000_STEPS_1e8_rate_0.1_beta_DPO
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6920
- Rewards/chosen: -0.0058
- Rewards/rejected: -0.0082
- Rewards/accuracies: 0.5121
- Rewards/margins: 0.0024
- Logps/rejected: -28.6543
- Logps/chosen: -23.4436
- Logits/rejected: -2.8649
- Logits/chosen: -2.8652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.693 | 0.1 | 50 | 0.6928 | 0.0007 | -0.0000 | 0.4549 | 0.0007 | -28.5728 | -23.3792 | -2.8652 | -2.8654 |
| 0.693 | 0.2 | 100 | 0.6920 | 0.0012 | -0.0011 | 0.4945 | 0.0023 | -28.5838 | -23.3741 | -2.8653 | -2.8655 |
| 0.693 | 0.29 | 150 | 0.6923 | -0.0015 | -0.0033 | 0.4989 | 0.0018 | -28.6052 | -23.4006 | -2.8651 | -2.8653 |
| 0.694 | 0.39 | 200 | 0.6923 | -0.0020 | -0.0037 | 0.4813 | 0.0017 | -28.6093 | -23.4058 | -2.8651 | -2.8653 |
| 0.6916 | 0.49 | 250 | 0.6922 | -0.0026 | -0.0046 | 0.4879 | 0.0021 | -28.6189 | -23.4118 | -2.8651 | -2.8654 |
| 0.6927 | 0.59 | 300 | 0.6920 | -0.0039 | -0.0063 | 0.5011 | 0.0023 | -28.6350 | -23.4253 | -2.8650 | -2.8653 |
| 0.6941 | 0.68 | 350 | 0.6927 | -0.0048 | -0.0058 | 0.4659 | 0.0010 | -28.6304 | -23.4334 | -2.8650 | -2.8652 |
| 0.6924 | 0.78 | 400 | 0.6922 | -0.0049 | -0.0068 | 0.4989 | 0.0019 | -28.6399 | -23.4345 | -2.8650 | -2.8653 |
| 0.6919 | 0.88 | 450 | 0.6918 | -0.0056 | -0.0084 | 0.4857 | 0.0028 | -28.6562 | -23.4418 | -2.8650 | -2.8653 |
| 0.6913 | 0.98 | 500 | 0.6913 | -0.0047 | -0.0085 | 0.5077 | 0.0038 | -28.6577 | -23.4328 | -2.8649 | -2.8652 |
| 0.6914 | 1.07 | 550 | 0.6915 | -0.0034 | -0.0067 | 0.5143 | 0.0033 | -28.6398 | -23.4200 | -2.8650 | -2.8653 |
| 0.6939 | 1.17 | 600 | 0.6922 | -0.0069 | -0.0089 | 0.5033 | 0.0020 | -28.6613 | -23.4550 | -2.8650 | -2.8652 |
| 0.6917 | 1.27 | 650 | 0.6920 | -0.0056 | -0.0081 | 0.5231 | 0.0025 | -28.6535 | -23.4422 | -2.8650 | -2.8653 |
| 0.6919 | 1.37 | 700 | 0.6921 | -0.0052 | -0.0074 | 0.5055 | 0.0021 | -28.6463 | -23.4383 | -2.8650 | -2.8653 |
| 0.6929 | 1.46 | 750 | 0.6915 | -0.0044 | -0.0078 | 0.5363 | 0.0034 | -28.6506 | -23.4298 | -2.8650 | -2.8653 |
| 0.6919 | 1.56 | 800 | 0.6922 | -0.0063 | -0.0083 | 0.5209 | 0.0020 | -28.6553 | -23.4489 | -2.8649 | -2.8652 |
| 0.6925 | 1.66 | 850 | 0.6921 | -0.0058 | -0.0080 | 0.5121 | 0.0022 | -28.6528 | -23.4438 | -2.8649 | -2.8652 |
| 0.6925 | 1.76 | 900 | 0.6920 | -0.0058 | -0.0082 | 0.5121 | 0.0024 | -28.6543 | -23.4436 | -2.8649 | -2.8652 |
| 0.6939 | 1.86 | 950 | 0.6920 | -0.0058 | -0.0082 | 0.5121 | 0.0024 | -28.6543 | -23.4436 | -2.8649 | -2.8652 |
| 0.6924 | 1.95 | 1000 | 0.6920 | -0.0058 | -0.0082 | 0.5121 | 0.0024 | -28.6543 | -23.4436 | -2.8649 | -2.8652 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AmineSaidi-ISTIC/phi-2-finetuned-knowledgator-events_classification | AmineSaidi-ISTIC | 2024-03-10T21:21:55Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-06T13:56:06Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-finetuned-knowledgator-events_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-knowledgator-events_classification
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2 |
prince-canuma/babyLlama | prince-canuma | 2024-03-10T21:15:54Z | 93 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T21:15:15Z | ---
tags:
- generated_from_trainer
model-index:
- name: babyLlama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babyLlama
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Felladrin/gguf-TinyMistral-248M-Chat-v1 | Felladrin | 2024-03-10T21:11:21Z | 25 | 0 | null | [
"gguf",
"base_model:Felladrin/TinyMistral-248M-Chat-v2",
"base_model:quantized:Felladrin/TinyMistral-248M-Chat-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-10T20:27:47Z | ---
license: apache-2.0
base_model: Felladrin/TinyMistral-248M-Chat-v1
---
GGUF version of [Felladrin/TinyMistral-248M-Chat-v1](https://huggingface.co/Felladrin/TinyMistral-248M-Chat-v1).
|
bartowski/speechless-starcoder2-7b-exl2 | bartowski | 2024-03-10T20:55:17Z | 0 | 1 | transformers | [
"transformers",
"code",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:TokenBender/python_eval_instruct_51k",
"dataset:codefuse-ai/Evol-instruction-66k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T20:41:05Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- teknium/OpenHermes-2.5
- TokenBender/python_eval_instruct_51k
- codefuse-ai/Evol-instruction-66k
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.0
verified: false
quantized_by: bartowski
---
## Exllama v2 Quantizations of speechless-starcoder2-7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization.
## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/uukuguy/speechless-starcoder2-7b
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/8_0">8.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/4_25">4.25 bits per weight</a>
<a href="https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2/tree/3_5">3.5 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/speechless-starcoder2-7b-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
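As an alternative to the CLI commands below, the same downloads can be scripted from Python with `huggingface_hub.snapshot_download`; a small sketch, where `revision` is simply the branch name (bits per weight) listed above:
```python
# Sketch: download a specific quantization branch from Python instead of the CLI.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/speechless-starcoder2-7b-exl2",
    revision="6_5",  # branch name corresponds to bits per weight
    local_dir="speechless-starcoder2-7b-exl2-6_5",
)
```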
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `speechless-starcoder2-7b-exl2`:
```shell
mkdir speechless-starcoder2-7b-exl2
huggingface-cli download bartowski/speechless-starcoder2-7b-exl2 --local-dir speechless-starcoder2-7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir speechless-starcoder2-7b-exl2-6_5
huggingface-cli download bartowski/speechless-starcoder2-7b-exl2 --revision 6_5 --local-dir speechless-starcoder2-7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir speechless-starcoder2-7b-exl2-6.5
huggingface-cli download bartowski/speechless-starcoder2-7b-exl2 --revision 6_5 --local-dir speechless-starcoder2-7b-exl2-6.5 --local-dir-use-symlinks False
``` |
CalebCometML/andrew-test | CalebCometML | 2024-03-10T20:49:33Z | 1 | 2 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"dora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-10T20:40:33Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK man
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - CalebCometML/andrew-test
<Gallery />
## Model description
These are CalebCometML/andrew-test LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK man to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](CalebCometML/andrew-test/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
bartowski/dolphincoder-starcoder2-7b-exl2 | bartowski | 2024-03-10T20:43:19Z | 2 | 2 | null | [
"text-generation",
"en",
"dataset:cognitivecomputations/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:m-a-p/Code-Feedback",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:microsoft/orca-math-word-problems-200k",
"license:bigcode-openrail-m",
"region:us"
] | text-generation | 2024-03-10T16:05:14Z | ---
datasets:
- cognitivecomputations/dolphin
- jondurbin/airoboros-2.2.1
- cognitivecomputations/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
- microsoft/orca-math-word-problems-200k
language:
- en
license: bigcode-openrail-m
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of dolphincoder-starcoder2-7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization.
## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-7b
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.2 GB | 10.2 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/6_5) | 6.5 | 8.0 | 7.1 GB | 7.9 GB | 8.9 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/5_0) | 5.0 | 6.0 | 5.8 GB | 6.6 GB | 7.6 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/4_25) | 4.25 | 6.0 | 5.1 GB | 5.9 GB | 6.9 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/3_5) | 3.5 | 6.0 | 4.5 GB | 5.3 GB | 6.3 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `dolphincoder-starcoder2-7b-exl2`:
```shell
mkdir dolphincoder-starcoder2-7b-exl2
huggingface-cli download bartowski/dolphincoder-starcoder2-7b-exl2 --local-dir dolphincoder-starcoder2-7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir dolphincoder-starcoder2-7b-exl2-6_5
huggingface-cli download bartowski/dolphincoder-starcoder2-7b-exl2 --revision 6_5 --local-dir dolphincoder-starcoder2-7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which sometimes has trouble with `_` in folder names):
```shell
mkdir dolphincoder-starcoder2-7b-exl2-6.5
huggingface-cli download bartowski/dolphincoder-starcoder2-7b-exl2 --revision 6_5 --local-dir dolphincoder-starcoder2-7b-exl2-6.5 --local-dir-use-symlinks False
``` |
KnutJaegersberg/B1-66ER | KnutJaegersberg | 2024-03-10T20:40:40Z | 0 | 1 | null | [
"region:us"
] | null | 2024-03-10T10:06:01Z | 
|
sarahahtee/classification_flan_t5_base_enriched | sarahahtee | 2024-03-10T20:36:31Z | 95 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T20:18:40Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: classification_flan_t5_base_enriched
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification_flan_t5_base_enriched
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8345
- Classification Report: precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3333 0.1818 0.2353 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2000 1.0000 0.3333 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8400 0.9333 0.8842 45
GENHX 0.6620 0.8868 0.7581 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.3333 1.0000 0.5000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6471 0.7857 0.7097 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8462 0.6471 0.7333 17
accuracy 0.7250 200
macro avg 0.4598 0.5204 0.4614 200
weighted avg 0.6618 0.7250 0.6812 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
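The listed hyperparameters translate roughly into the following `transformers` configuration; this is a hedged reconstruction, and the output directory is a placeholder rather than a value taken from the training run.
```python
# Hedged reconstruction of the training arguments listed above.
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
    output_dir="classification_flan_t5_base_enriched",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=7,
    per_device_eval_batch_size=7,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```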
### Training results
| Training Loss | Epoch | Step | Validation Loss | Classification Report |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| No log | 1.0 | 172 | 0.4392 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.1455 0.7273 0.2424 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.0000 0.0000 0.0000 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.0000 0.0000 0.0000 5
FAM/SOCHX 0.8148 0.9778 0.8889 45
GENHX 1.0000 0.3962 0.5676 53
GYNHX 0.0000 0.0000 0.0000 1
IMAGING 0.0000 0.0000 0.0000 1
IMMUNIZATIONS 0.0000 0.0000 0.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.4000 0.8571 0.5455 14
PASTSURGICAL 0.7778 1.0000 0.8750 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.6250 0.2941 0.4000 17
accuracy 0.5900 200
macro avg 0.2798 0.3085 0.2692 200
weighted avg 0.6663 0.5900 0.5694 200
|
| No log | 2.0 | 344 | 0.3438 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.5000 0.2727 0.3529 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2000 1.0000 0.3333 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.0000 0.0000 0.0000 5
FAM/SOCHX 0.9535 0.9111 0.9318 45
GENHX 0.5833 0.9245 0.7153 53
GYNHX 0.0000 0.0000 0.0000 1
IMAGING 0.0000 0.0000 0.0000 1
IMMUNIZATIONS 0.5000 1.0000 0.6667 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6364 0.5000 0.5600 14
PASTSURGICAL 0.5000 1.0000 0.6667 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8333 0.5882 0.6897 17
accuracy 0.7000 200
macro avg 0.3270 0.4057 0.3391 200
weighted avg 0.6347 0.7000 0.6476 200
|
| 0.3577 | 3.0 | 516 | 0.3416 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.2083 0.4545 0.2857 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.3333 1.0000 0.5000 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 1.0000 0.2000 0.3333 5
FAM/SOCHX 0.8696 0.8889 0.8791 45
GENHX 0.6769 0.8302 0.7458 53
GYNHX 0.0000 0.0000 0.0000 1
IMAGING 0.0000 0.0000 0.0000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.5500 0.7857 0.6471 14
PASTSURGICAL 1.0000 1.0000 1.0000 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8000 0.4706 0.5926 17
accuracy 0.6950 200
macro avg 0.4136 0.4273 0.3925 200
weighted avg 0.6613 0.6950 0.6605 200
|
| 0.3577 | 4.0 | 688 | 0.4118 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3333 0.0909 0.1429 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.1429 1.0000 0.2500 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8431 0.9556 0.8958 45
GENHX 0.6438 0.8868 0.7460 53
GYNHX 0.0000 0.0000 0.0000 1
IMAGING 1.0000 1.0000 1.0000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6667 0.7143 0.6897 14
PASTSURGICAL 0.7778 1.0000 0.8750 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8571 0.7059 0.7742 17
accuracy 0.7250 200
macro avg 0.4299 0.4735 0.4262 200
weighted avg 0.6503 0.7250 0.6731 200
|
| 0.3577 | 5.0 | 860 | 0.4030 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.2778 0.4545 0.3448 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.3333 1.0000 0.5000 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.6667 0.4000 0.5000 5
FAM/SOCHX 0.8511 0.8889 0.8696 45
GENHX 0.7077 0.8679 0.7797 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 1.0000 1.0000 1.0000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.7333 0.7857 0.7586 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.7500 0.7059 0.7273 17
accuracy 0.7350 200
macro avg 0.5077 0.5438 0.5134 200
weighted avg 0.6794 0.7350 0.7013 200
|
| 0.0808 | 6.0 | 1032 | 0.5599 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.2500 0.0909 0.1333 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.1250 1.0000 0.2222 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8511 0.8889 0.8696 45
GENHX 0.6267 0.8868 0.7344 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 1.0000 1.0000 1.0000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.9091 1.0000 0.9524 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.5882 0.7143 0.6452 14
PASTSURGICAL 0.8750 1.0000 0.9333 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8462 0.6471 0.7333 17
accuracy 0.7100 200
macro avg 0.4786 0.5172 0.4733 200
weighted avg 0.6486 0.7100 0.6660 200
|
| 0.0808 | 7.0 | 1204 | 0.5132 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3333 0.2727 0.3000 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.1429 1.0000 0.2500 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8542 0.9111 0.8817 45
GENHX 0.6761 0.9057 0.7742 53
GYNHX 0.0000 0.0000 0.0000 1
IMAGING 1.0000 1.0000 1.0000 1
IMMUNIZATIONS 0.5000 1.0000 0.6667 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.7857 0.7857 0.7857 14
PASTSURGICAL 0.8750 1.0000 0.9333 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.7857 0.6471 0.7097 17
accuracy 0.7300 200
macro avg 0.4143 0.4819 0.4226 200
weighted avg 0.6645 0.7300 0.6876 200
|
| 0.0808 | 8.0 | 1376 | 0.6372 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3333 0.4545 0.3846 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2500 1.0000 0.4000 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8400 0.9333 0.8842 45
GENHX 0.7302 0.8679 0.7931 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.5000 1.0000 0.6667 1
IMMUNIZATIONS 0.5000 1.0000 0.6667 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.7333 0.7857 0.7586 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.7857 0.6471 0.7097 17
accuracy 0.7350 200
macro avg 0.4503 0.5331 0.4669 200
weighted avg 0.6794 0.7350 0.6997 200
|
| 0.034 | 9.0 | 1548 | 0.7346 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3333 0.0909 0.1429 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.1250 1.0000 0.2222 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8542 0.9111 0.8817 45
GENHX 0.6026 0.8868 0.7176 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 1.0000 1.0000 1.0000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6429 0.6429 0.6429 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.7692 0.5882 0.6667 17
accuracy 0.7000 200
macro avg 0.4830 0.5047 0.4674 200
weighted avg 0.6454 0.7000 0.6565 200
|
| 0.034 | 10.0 | 1720 | 0.6654 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.4286 0.5455 0.4800 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2000 1.0000 0.3333 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8600 0.9556 0.9053 45
GENHX 0.7302 0.8679 0.7931 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 1.0000 1.0000 1.0000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6667 0.7143 0.6897 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8000 0.7059 0.7500 17
accuracy 0.7450 200
macro avg 0.5009 0.5381 0.5013 200
weighted avg 0.6904 0.7450 0.7112 200
|
| 0.034 | 11.0 | 1892 | 0.7455 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3333 0.1818 0.2353 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.1429 1.0000 0.2500 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.6667 0.4000 0.5000 5
FAM/SOCHX 0.8511 0.8889 0.8696 45
GENHX 0.6486 0.9057 0.7559 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.5000 1.0000 0.6667 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6250 0.7143 0.6667 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8462 0.6471 0.7333 17
accuracy 0.7200 200
macro avg 0.4724 0.5256 0.4733 200
weighted avg 0.6639 0.7200 0.6801 200
|
| 0.0107 | 12.0 | 2064 | 0.7687 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.4286 0.2727 0.3333 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.1667 1.0000 0.2857 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.6667 0.4000 0.5000 5
FAM/SOCHX 0.8478 0.8667 0.8571 45
GENHX 0.6575 0.9057 0.7619 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.3333 1.0000 0.5000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.5882 0.7143 0.6452 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8462 0.6471 0.7333 17
accuracy 0.7200 200
macro avg 0.4684 0.5290 0.4703 200
weighted avg 0.6675 0.7200 0.6822 200
|
| 0.0107 | 13.0 | 2236 | 0.8114 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.4545 0.4545 0.4545 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.1429 1.0000 0.2500 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8511 0.8889 0.8696 45
GENHX 0.6620 0.8868 0.7581 53
GYNHX 0.0000 0.0000 0.0000 1
IMAGING 0.5000 1.0000 0.6667 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6667 0.7143 0.6897 14
PASTSURGICAL 0.8750 1.0000 0.9333 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.9167 0.6471 0.7586 17
accuracy 0.7250 200
macro avg 0.4201 0.4854 0.4266 200
weighted avg 0.6695 0.7250 0.6865 200
|
| 0.0107 | 14.0 | 2408 | 0.7763 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3636 0.3636 0.3636 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2500 1.0000 0.4000 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8511 0.8889 0.8696 45
GENHX 0.6620 0.8868 0.7581 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.3333 1.0000 0.5000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6667 0.7143 0.6897 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.7857 0.6471 0.7097 17
accuracy 0.7200 200
macro avg 0.4623 0.5237 0.4683 200
weighted avg 0.6624 0.7200 0.6819 200
|
| 0.0049 | 15.0 | 2580 | 0.8111 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.4444 0.3636 0.4000 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.1667 1.0000 0.2857 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8542 0.9111 0.8817 45
GENHX 0.6667 0.9057 0.7680 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.5000 1.0000 0.6667 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6667 0.7143 0.6897 14
PASTSURGICAL 1.0000 1.0000 1.0000 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8333 0.5882 0.6897 17
accuracy 0.7300 200
macro avg 0.4733 0.5300 0.4766 200
weighted avg 0.6733 0.7300 0.6906 200
|
| 0.0049 | 16.0 | 2752 | 0.8138 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.4286 0.2727 0.3333 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2000 1.0000 0.3333 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8511 0.8889 0.8696 45
GENHX 0.6575 0.9057 0.7619 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.3333 1.0000 0.5000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6471 0.7857 0.7097 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8462 0.6471 0.7333 17
accuracy 0.7250 200
macro avg 0.4649 0.5237 0.4658 200
weighted avg 0.6684 0.7250 0.6844 200
|
| 0.0049 | 17.0 | 2924 | 0.8209 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3750 0.2727 0.3158 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2000 1.0000 0.3333 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8400 0.9333 0.8842 45
GENHX 0.6571 0.8679 0.7480 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.3333 1.0000 0.5000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6250 0.7143 0.6667 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8462 0.6471 0.7333 17
accuracy 0.7200 200
macro avg 0.4605 0.5205 0.4628 200
weighted avg 0.6613 0.7200 0.6800 200
|
| 0.0024 | 18.0 | 3096 | 0.8220 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.4286 0.2727 0.3333 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2000 1.0000 0.3333 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8400 0.9333 0.8842 45
GENHX 0.6714 0.8868 0.7642 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.3333 1.0000 0.5000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6471 0.7857 0.7097 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8462 0.6471 0.7333 17
accuracy 0.7300 200
macro avg 0.4650 0.5250 0.4666 200
weighted avg 0.6696 0.7300 0.6883 200
|
| 0.0024 | 19.0 | 3268 | 0.8372 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3333 0.1818 0.2353 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2000 1.0000 0.3333 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8400 0.9333 0.8842 45
GENHX 0.6620 0.8868 0.7581 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.3333 1.0000 0.5000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6471 0.7857 0.7097 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8462 0.6471 0.7333 17
accuracy 0.7250 200
macro avg 0.4598 0.5204 0.4614 200
weighted avg 0.6618 0.7250 0.6812 200
|
| 0.0024 | 20.0 | 3440 | 0.8345 | precision recall f1-score support
ALLERGY 1.0000 0.9167 0.9565 12
ASSESSMENT 0.0000 0.0000 0.0000 11
CC 0.3333 0.1818 0.2353 11
DIAGNOSIS 0.0000 0.0000 0.0000 1
DISPOSITION 0.2000 1.0000 0.3333 1
EDCOURSE 0.0000 0.0000 0.0000 4
EXAM 0.5000 0.2000 0.2857 5
FAM/SOCHX 0.8400 0.9333 0.8842 45
GENHX 0.6620 0.8868 0.7581 53
GYNHX 1.0000 1.0000 1.0000 1
IMAGING 0.3333 1.0000 0.5000 1
IMMUNIZATIONS 1.0000 1.0000 1.0000 1
LABS 0.0000 0.0000 0.0000 1
MEDICATIONS 0.8333 1.0000 0.9091 10
OTHER_HISTORY 0.0000 0.0000 0.0000 3
PASTMEDICALHX 0.6471 0.7857 0.7097 14
PASTSURGICAL 1.0000 0.8571 0.9231 7
PLAN 0.0000 0.0000 0.0000 1
PROCEDURES 0.0000 0.0000 0.0000 1
ROS 0.8462 0.6471 0.7333 17
accuracy 0.7250 200
macro avg 0.4598 0.5204 0.4614 200
weighted avg 0.6618 0.7250 0.6812 200
|
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
fbellame/confoo-train-llama-style-1-1 | fbellame | 2024-03-10T20:33:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T19:06:14Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure the library is installed.
```bash
pip install transformers==4.36.1
```
Also make sure you provide your Hugging Face token to the pipeline if the model is in a private repo.
- Either leave `token=True` in the `pipeline` and log in to huggingface_hub by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="fbellame/confoo-train-llama-style-1-1",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.0),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
Why is drinking water so healthy?</s>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"fbellame/confoo-train-llama-style-1-1",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"fbellame/confoo-train-llama-style-1-1",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.0),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "fbellame/confoo-train-llama-style-1-1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?</s>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.0),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the model using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Sharding across multiple GPUs is also possible by setting ```device_map="auto"```.
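For example, a hedged sketch using those standard `transformers` arguments (this assumes `accelerate` and `bitsandbytes` are installed; it is not taken from the original card):
```python
from transformers import AutoModelForCausalLM
# 4-bit quantized load, sharded across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "fbellame/confoo-train-llama-style-1-1",
    load_in_4bit=True,   # or load_in_8bit=True
    device_map="auto",
    trust_remote_code=True,
)
```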
## Model Architecture
```
MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=1024, bias=False)
(v_proj): Linear(in_features=4096, out_features=1024, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLU()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
jairNeto/bert-finetuned-sem_eval-english | jairNeto | 2024-03-10T20:24:14Z | 94 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:Jairnetojp/content-moderation",
"base_model:finetune:Jairnetojp/content-moderation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T20:23:02Z | ---
base_model: Jairnetojp/content-moderation
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-sem_eval-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sem_eval-english
This model is a fine-tuned version of [Jairnetojp/content-moderation](https://huggingface.co/Jairnetojp/content-moderation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2343
- F1: 0.5458
- Roc Auc: 0.7829
- Accuracy: 0.4655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 232 | 0.2283 | 0.5361 | 0.7829 | 0.4503 |
| No log | 2.0 | 464 | 0.2343 | 0.5458 | 0.7829 | 0.4655 |
| 0.069 | 3.0 | 696 | 0.2461 | 0.5392 | 0.7832 | 0.4544 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Balassar/balassarprofile | Balassar | 2024-03-10T20:18:41Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T20:10:46Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: balassarprofile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# balassarprofile
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8802 | 1.0 | 1 | 3.6270 |
| 3.861 | 2.0 | 2 | 3.5746 |
| 3.7758 | 3.0 | 3 | 3.4416 |
| 3.5819 | 4.0 | 4 | 3.3048 |
| 3.3879 | 5.0 | 5 | 3.1740 |
| 3.2106 | 6.0 | 6 | 3.0575 |
| 3.0652 | 7.0 | 7 | 2.9588 |
| 2.94 | 8.0 | 8 | 2.8822 |
| 2.8566 | 9.0 | 9 | 2.8301 |
| 2.7926 | 10.0 | 10 | 2.8042 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Turkgamercat/Muslera | Turkgamercat | 2024-03-10T20:17:50Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stablediffusionapi/icon-lab-test-ai",
"base_model:adapter:stablediffusionapi/icon-lab-test-ai",
"region:us"
] | text-to-image | 2024-03-10T20:13:55Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/1000012898.jpg
base_model: stablediffusionapi/icon-lab-test-ai
instance_prompt: null
---
# Muslera
<Gallery />
## Download model
[Download](/Turkgamercat/Muslera/tree/main) them in the Files & versions tab.
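A hedged loading sketch with `diffusers` is shown below; the LoRA weight filename is not documented in this repo, so `weight_name` may need to be set explicitly, and the prompt is only a placeholder.
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/icon-lab-test-ai", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Turkgamercat/Muslera")  # add weight_name=... if needed
image = pipe("a portrait photo, placeholder prompt").images[0]
image.save("muslera.png")
```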
|
xKizzi/taxirepo | xKizzi | 2024-03-10T20:13:31Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T20:13:29Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxirepo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.34 +/- 2.46
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="xKizzi/taxirepo", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
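`load_from_hub` and `gym` are not imported in the snippet above. A minimal helper consistent with it (assuming the file is a pickled dict with keys such as `env_id` and the Q-table, as the snippet implies) could look like:
```python
import pickle
import gymnasium as gym  # or `import gym`, depending on your environment
from huggingface_hub import hf_hub_download
def load_from_hub(repo_id: str, filename: str):
    """Download and unpickle a Q-learning model file from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```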
|
deepnet/SN6-67M4 | deepnet | 2024-03-10T20:06:25Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T20:00:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eduvedras/pix2struct-textcaps-base-desc-vars-final | eduvedras | 2024-03-10T20:05:13Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"pix2struct",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-03-10T19:03:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |