modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-07-13 12:28:20
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 518
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-07-13 12:26:25
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
ShynBui/ppo-LunarLander-v2 | ShynBui | 2024-03-03T21:47:31Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-03T21:47:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.22 +/- 15.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
balnazzar/Frankie-tiny | balnazzar | 2024-03-03T21:44:09Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Locutusque/TinyMistral-248M-v2-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T21:43:29Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Locutusque/TinyMistral-248M-v2-Instruct
- Locutusque/TinyMistral-248M-v2-Instruct
---
# Frankie-tiny
Frankie-tiny is a (franken)merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Locutusque/TinyMistral-248M-v2-Instruct](https://huggingface.co/Locutusque/TinyMistral-248M-v2-Instruct)
* [Locutusque/TinyMistral-248M-v2-Instruct](https://huggingface.co/Locutusque/TinyMistral-248M-v2-Instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Locutusque/TinyMistral-248M-v2-Instruct
layer_range: [0, 12]
- sources:
- model: Locutusque/TinyMistral-248M-v2-Instruct
layer_range: [8, 12]
merge_method: passthrough
dtype: float16
``` |
hotsuyuki/gpt_1.3B_global_step2000_zero-1_dp-4_pp-2_tp-2_flashattn2-on | hotsuyuki | 2024-03-03T21:40:55Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T21:37:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ravp1/ppo-Huggy | ravp1 | 2024-03-03T21:29:53Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2024-03-03T21:29:47Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ravp1/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
itsfalc/fine-tuned-wav2vec2 | itsfalc | 2024-03-03T21:29:16Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-03-03T20:50:29Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: itsfalc/output_directory
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# itsfalc/output_directory
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 28.4373
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- training_steps: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.6 | 20 | 47.3619 | 1.0007 |
| 52.5055 | 3.2 | 40 | 28.4373 | 1.0 |
| 31.5642 | 4.8 | 60 | 21.6139 | 1.0 |
| 31.5642 | 6.4 | 80 | 20.5592 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
salohnana2018/ABSA-SentencePair-DAPT-HARDAR-bert-base-Camel-MSA-ru3 | salohnana2018 | 2024-03-03T21:28:28Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD",
"base_model:finetune:salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-03T20:36:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
base_model: salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD
model-index:
- name: ABSA-SentencePair-DAPT-HARDAR-bert-base-Camel-MSA-ru3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABSA-SentencePair-DAPT-HARDAR-bert-base-Camel-MSA-ru3
This model is a fine-tuned version of [salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD](https://huggingface.co/salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1055
- Accuracy: 0.9026
- F1: 0.9026
- Precision: 0.9026
- Recall: 0.9026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 23
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0585 | 1.0 | 265 | 0.0424 | 0.8497 | 0.8497 | 0.8497 | 0.8497 |
| 0.0377 | 2.0 | 530 | 0.0422 | 0.8861 | 0.8861 | 0.8861 | 0.8861 |
| 0.0259 | 3.0 | 795 | 0.0421 | 0.8819 | 0.8819 | 0.8819 | 0.8819 |
| 0.017 | 4.0 | 1060 | 0.0477 | 0.8899 | 0.8899 | 0.8899 | 0.8899 |
| 0.0108 | 5.0 | 1325 | 0.0582 | 0.8927 | 0.8927 | 0.8927 | 0.8927 |
| 0.0085 | 6.0 | 1590 | 0.0650 | 0.8899 | 0.8899 | 0.8899 | 0.8899 |
| 0.0063 | 7.0 | 1855 | 0.0680 | 0.8922 | 0.8922 | 0.8922 | 0.8922 |
| 0.0043 | 8.0 | 2120 | 0.0705 | 0.9003 | 0.9003 | 0.9003 | 0.9003 |
| 0.0042 | 9.0 | 2385 | 0.0711 | 0.8974 | 0.8974 | 0.8974 | 0.8974 |
| 0.003 | 10.0 | 2650 | 0.0773 | 0.8979 | 0.8979 | 0.8979 | 0.8979 |
| 0.0026 | 11.0 | 2915 | 0.0842 | 0.8965 | 0.8965 | 0.8965 | 0.8965 |
| 0.002 | 12.0 | 3180 | 0.0888 | 0.8979 | 0.8979 | 0.8979 | 0.8979 |
| 0.002 | 13.0 | 3445 | 0.0896 | 0.8970 | 0.8970 | 0.8970 | 0.8970 |
| 0.0015 | 14.0 | 3710 | 0.0930 | 0.9008 | 0.9008 | 0.9008 | 0.9008 |
| 0.0012 | 15.0 | 3975 | 0.1008 | 0.9003 | 0.9003 | 0.9003 | 0.9003 |
| 0.0015 | 16.0 | 4240 | 0.0987 | 0.9017 | 0.9017 | 0.9017 | 0.9017 |
| 0.0011 | 17.0 | 4505 | 0.1030 | 0.9017 | 0.9017 | 0.9017 | 0.9017 |
| 0.0011 | 18.0 | 4770 | 0.1051 | 0.9003 | 0.9003 | 0.9003 | 0.9003 |
| 0.0011 | 19.0 | 5035 | 0.1046 | 0.9036 | 0.9036 | 0.9036 | 0.9036 |
| 0.001 | 20.0 | 5300 | 0.1055 | 0.9026 | 0.9026 | 0.9026 | 0.9026 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
furrutiav/bert_qa_extractor_cockatiel_2022_best_both_ef_plus_nllf_z_value_over_subsample_it_630 | furrutiav | 2024-03-03T21:23:39Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-03-03T21:23:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Phudish/Merged_imdb | Phudish | 2024-03-03T21:01:20Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:Phudish/imdb_finetune_epoch_1_gpt2",
"base_model:merge:Phudish/imdb_finetune_epoch_1_gpt2",
"base_model:Phudish/imdb_finetune_epoch_5_gpt2",
"base_model:merge:Phudish/imdb_finetune_epoch_5_gpt2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-03T21:01:05Z | ---
base_model:
- Phudish/imdb_finetune_epoch_1_gpt2
- Phudish/imdb_finetune_epoch_5_gpt2
library_name: transformers
tags:
- mergekit
- merge
---
# finetune_gpt2_merged_base
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [Phudish/imdb_finetune_epoch_1_gpt2](https://huggingface.co/Phudish/imdb_finetune_epoch_1_gpt2)
* [Phudish/imdb_finetune_epoch_5_gpt2](https://huggingface.co/Phudish/imdb_finetune_epoch_5_gpt2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: linear
slices:
- sources:
- layer_range: [0, 12]
model:
model:
path: Phudish/imdb_finetune_epoch_1_gpt2
parameters:
weight: 1.0
- layer_range: [0, 12]
model:
model:
path: Phudish/imdb_finetune_epoch_5_gpt2
parameters:
weight: 1.0
```
|
abritez/distilbert-base-uncased-finetuned-ner | abritez | 2024-03-03T20:52:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-01T20:54:15Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the CoNLL 2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9240
- Recall: 0.9356
- F1: 0.9297
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2479 | 1.0 | 878 | 0.0702 | 0.9011 | 0.9178 | 0.9094 | 0.9796 |
| 0.0518 | 2.0 | 1756 | 0.0607 | 0.9200 | 0.9306 | 0.9253 | 0.9826 |
| 0.031 | 3.0 | 2634 | 0.0599 | 0.9240 | 0.9356 | 0.9297 | 0.9837 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
wandb/gemma-2b-zephyr-dpo | wandb | 2024-03-03T20:50:46Z | 20 | 2 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:wandb/gemma-2b-zephyr-sft",
"base_model:finetune:wandb/gemma-2b-zephyr-sft",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-29T08:40:28Z | ---
license: other
library_name: transformers
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: wandb/gemma-2b-zephyr-sft
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/llm_surgery/gemma-zephyr)
# Gemma 2B Zephyr DPO
The [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) DPO recipe applied on top of SFT finetuned Gemma 2B
## Model description
- **Model type:** A 8.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** [wandb/gemma-2b-zephyr-sft](https://huggingface.co/wandb/gemma-2b-zephyr-sft/)
## Recipe
We trained using the DPO script in [alignment handbook recipe](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_dpo.py) and logging to W&B
Visit the [W&B workspace here](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape)
## License
This model has the same license as the [original Gemma model collection](https://ai.google.dev/gemma/terms)
## Compute provided by [Lambda Labs](https://lambdalabs.com/) - 8xA100 80GB node
around 13 hours of training
|
azizksar/train_mistral_v3 | azizksar | 2024-03-03T20:45:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-03T20:45:25Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: train_mistral_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_mistral_v3
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1783 | 10.0 | 240 | 1.8274 |
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2 |
Caesaripse/mistral7binstruct_summarize | Caesaripse | 2024-03-03T20:44:04Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-03T20:44:01Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral7binstruct_summarize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7binstruct_summarize
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7382 | 0.22 | 25 | 1.5653 |
| 1.529 | 0.43 | 50 | 1.4778 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Sachioster/mistral-ft | Sachioster | 2024-03-03T20:43:25Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-03T20:43:22Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: mistral-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9822 | 0.99 | 42 | 1.5521 |
| 1.4408 | 1.99 | 84 | 1.3540 |
| 1.2501 | 2.98 | 126 | 1.2406 |
| 1.0744 | 4.0 | 169 | 1.1296 |
| 0.9751 | 4.99 | 211 | 1.0512 |
| 0.8817 | 5.99 | 253 | 0.9931 |
| 0.8158 | 6.98 | 295 | 0.9602 |
| 0.7383 | 8.0 | 338 | 0.9393 |
| 0.7179 | 8.99 | 380 | 0.9236 |
| 0.6866 | 9.94 | 420 | 0.9210 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
ceofast/emotiom_analysis_with_distilbert | ceofast | 2024-03-03T20:37:12Z | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-03T20:00:15Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: ceofast/emotiom_analysis_with_distilbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ceofast/emotiom_analysis_with_distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1305
- Validation Loss: 0.1559
- Train Accuracy: 0.932
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3838 | 0.1456 | 0.936 | 0 |
| 0.1305 | 0.1559 | 0.932 | 1 |
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.10.1
- Datasets 2.17.0
- Tokenizers 0.15.1
|
bofenghuang/whisper-large-v3-french-distil-dec8 | bofenghuang | 2024-03-03T20:29:07Z | 491 | 3 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"fr",
"dataset:mozilla-foundation/common_voice_13_0",
"dataset:facebook/multilingual_librispeech",
"dataset:facebook/voxpopuli",
"dataset:google/fleurs",
"dataset:gigant/african_accented_french",
"arxiv:2311.00430",
"arxiv:2212.04356",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-12-25T12:49:51Z | ---
license: mit
language: fr
library_name: transformers
pipeline_tag: automatic-speech-recognition
thumbnail: null
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_13_0
- facebook/multilingual_librispeech
- facebook/voxpopuli
- google/fleurs
- gigant/african_accented_french
metrics:
- wer
model-index:
- name: whisper-large-v3-french-distil-dec8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13.0
type: mozilla-foundation/common_voice_13_0
config: fr
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 7.62
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech (MLS)
type: facebook/multilingual_librispeech
config: french
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 3.80
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: VoxPopuli
type: facebook/voxpopuli
config: fr
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 8.85
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Fleurs
type: google/fleurs
config: fr_fr
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 5.40
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: African Accented French
type: gigant/african_accented_french
config: fr
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 4.18
---
# Whisper-Large-V3-French-Distil-Dec8
Whisper-Large-V3-French-Distil represents a series of distilled versions of [Whisper-Large-V3-French](https://huggingface.co/bofenghuang/whisper-large-v3-french), achieved by reducing the number of decoder layers from 32 to 16, 8, 4, or 2 and distilling using a large-scale dataset, as outlined in this [paper](https://arxiv.org/abs/2311.00430).
The distilled variants reduce memory usage and inference time while maintaining performance (based on the retained number of layers) and mitigating the risk of hallucinations, particularly in long-form transcriptions. Moreover, they can be seamlessly combined with the original Whisper-Large-V3-French model for speculative decoding, resulting in improved inference speed and consistent outputs compared to using the standalone model.
This model has been converted into various formats, facilitating its usage across different libraries, including transformers, openai-whisper, fasterwhisper, whisper.cpp, candle, mlx, etc.
## Table of Contents
- [Performance](#performance)
- [Usage](#usage)
- [Hugging Face Pipeline](#hugging-face-pipeline)
- [Hugging Face Low-level APIs](#hugging-face-low-level-apis)
- [Speculative Decoding](#speculative-decoding)
- [OpenAI Whisper](#openai-whisper)
- [Faster Whisper](#faster-whisper)
- [Whisper.cpp](#whispercpp)
- [Candle](#candle)
- [MLX](#mlx)
- [Training details](#training-details)
- [Acknowledgements](#acknowledgements)
## Performance
We evaluated our model on both short and long-form transcriptions, and also tested it on both in-distribution and out-of-distribution datasets to conduct a comprehensive analysis assessing its accuracy, generalizability, and robustness.
Please note that the reported WER is the result after converting numbers to text, removing punctuation (except for apostrophes and hyphens), and converting all characters to lowercase.
All evaluation results on the public datasets can be found [here](https://drive.google.com/drive/folders/1rFIh6yXRVa9RZ0ieZoKiThFZgQ4STPPI?usp=drive_link).
### Short-Form Transcription

Due to the lack of readily available out-of-domain (OOD) and long-form test sets in French, we evaluated using internal test sets from [Zaion Lab](https://zaion.ai/). These sets comprise human-annotated audio-transcription pairs from call center conversations, which are notable for their significant background noise and domain-specific terminology.
### Long-Form Transcription

The long-form transcription was run using the 🤗 Hugging Face pipeline for quicker evaluation. Audio files were segmented into 30-second chunks and processed in parallel.
## Usage
### Hugging Face Pipeline
The model can easily used with the 🤗 Hugging Face [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class for audio transcription.
For long-form transcription (> 30 seconds), you can activate the process by passing the `chunk_length_s` argument. This approach segments the audio into smaller segments, processes them in parallel, and then joins them at the strides by finding the longest common sequence. While this chunked long-form approach may have a slight compromise in performance compared to OpenAI's sequential algorithm, it provides 9x faster inference speed.
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Load model
model_name_or_path = "bofenghuang/whisper-large-v3-french-distil-dec8"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name_or_path,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
)
model.to(device)
# Init pipeline
pipe = pipeline(
"automatic-speech-recognition",
model=model,
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
torch_dtype=torch_dtype,
device=device,
# chunk_length_s=30, # for long-form transcription
max_new_tokens=128,
)
# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]
# Run pipeline
result = pipe(sample)
print(result["text"])
```
### Hugging Face Low-level APIs
You can also use the 🤗 Hugging Face low-level APIs for transcription, offering greater control over the process, as demonstrated below:
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Load model
model_name_or_path = "bofenghuang/whisper-large-v3-french-distil-dec8"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name_or_path,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
)
model.to(device)
# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]
# Extract feautres
input_features = processor(
sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features
# Generate tokens
predicted_ids = model.generate(
input_features.to(dtype=torch_dtype).to(device), max_new_tokens=128
)
# Detokenize to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```
### Speculative Decoding
[Speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding) can be achieved using a draft model, essentially a distilled version of Whisper. This approach guarantees identical outputs to using the main Whisper model alone, offers a 2x faster inference speed, and incurs only a slight increase in memory overhead.
Since the distilled Whisper has the same encoder as the original, only its decoder need to be loaded, and encoder outputs are shared between the main and draft models during inference.
Using speculative decoding with the Hugging Face pipeline is simple - just specify the `assistant_model` within the generation configurations.
```python
import torch
from datasets import load_dataset
from transformers import (
AutoModelForCausalLM,
AutoModelForSpeechSeq2Seq,
AutoProcessor,
pipeline,
)
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Load model
model_name_or_path = "bofenghuang/whisper-large-v3-french"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name_or_path,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
)
model.to(device)
# Load draft model
assistant_model_name_or_path = "bofenghuang/whisper-large-v3-french-distil-dec2"
assistant_model = AutoModelForCausalLM.from_pretrained(
assistant_model_name_or_path,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
)
assistant_model.to(device)
# Init pipeline
pipe = pipeline(
"automatic-speech-recognition",
model=model,
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
torch_dtype=torch_dtype,
device=device,
generate_kwargs={"assistant_model": assistant_model},
max_new_tokens=128,
)
# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]
# Run pipeline
result = pipe(sample)
print(result["text"])
```
### OpenAI Whisper
You can also employ the sequential long-form decoding algorithm with a sliding window and temperature fallback, as outlined by OpenAI in their original [paper](https://arxiv.org/abs/2212.04356).
First, install the [openai-whisper](https://github.com/openai/whisper) package:
```bash
pip install -U openai-whisper
```
Then, download the converted model:
```bash
python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='bofenghuang/whisper-large-v3-french-distil-dec8', filename='original_model.pt', local_dir='./models/whisper-large-v3-french-distil-dec8')"
```
Now, you can transcirbe audio files by following the usage instructions provided in the repository:
```python
import whisper
from datasets import load_dataset
# Load model
model = whisper.load_model("./models/whisper-large-v3-french-distil-dec8/original_model.pt")
# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]["array"].astype("float32")
# Transcribe
result = model.transcribe(sample, language="fr")
print(result["text"])
```
### Faster Whisper
Faster Whisper is a reimplementation of OpenAI's Whisper models and the sequential long-form decoding algorithm in the [CTranslate2](https://github.com/OpenNMT/CTranslate2) format.
Compared to openai-whisper, it offers up to 4x faster inference speed, while consuming less memory. Additionally, the model can be quantized into int8, further enhancing its efficiency on both CPU and GPU.
First, install the [faster-whisper](https://github.com/SYSTRAN/faster-whisper) package:
```bash
pip install faster-whisper
```
Then, download the model converted to the CTranslate2 format:
```bash
python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='bofenghuang/whisper-large-v3-french-distil-dec8', local_dir='./models/whisper-large-v3-french-distil-dec8', allow_patterns='ctranslate2/*')"
```
Now, you can transcirbe audio files by following the usage instructions provided in the repository:
```python
from datasets import load_dataset
from faster_whisper import WhisperModel
# Load model
model = WhisperModel("./models/whisper-large-v3-french-distil-dec8/ctranslate2", device="cuda", compute_type="float16") # Run on GPU with FP16
# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
sample = dataset[0]["audio"]["array"].astype("float32")
segments, info = model.transcribe(sample, beam_size=5, language="fr")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
### Whisper.cpp
Whisper.cpp is a reimplementation of OpenAI's Whisper models, crafted in plain C/C++ without any dependencies. It offers compatibility with various backends and platforms.
Additionally, the model can be quantized to either 4-bit or 5-bit integers, further enhancing its efficiency.
First, clone and build the [whisper.cpp](https://github.com/ggerganov/whisper.cpp) repository:
```bash
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp
# build the main example
make
```
Next, download the converted ggml weights from the Hugging Face Hub:
```bash
# Download model quantized with Q5_0 method
python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='bofenghuang/whisper-large-v3-french-distil-dec8', filename='ggml-model-q5_0.bin', local_dir='./models/whisper-large-v3-french-distil-dec8')"
```
Now, you can transcribe an audio file using the following command:
```bash
./main -m ./models/whisper-large-v3-french-distil-dec8/ggml-model-q5_0.bin -l fr -f /path/to/audio/file --print-colors
```
### Candle
[Candle-whisper](https://github.com/huggingface/candle/tree/main/candle-examples/examples/whisper) is a reimplementation of OpenAI's Whisper models in the candle format - a lightweight ML framework built in Rust.
First, clone the [candle](https://github.com/huggingface/candle) repository:
```bash
git clone https://github.com/huggingface/candle.git
cd candle/candle-examples/examples/whisper
```
Transcribe an audio file using the following command:
```bash
cargo run --example whisper --release -- --model large-v3 --model-id bofenghuang/whisper-large-v3-french-distil-dec8 --language fr --input /path/to/audio/file
```
In order to use CUDA add `--features cuda` to the example command line:
```bash
cargo run --example whisper --release --features cuda -- --model large-v3 --model-id bofenghuang/whisper-large-v3-french-distil-dec8 --language fr --input /path/to/audio/file
```
### MLX
[MLX-Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper) is a reimplementation of OpenAI's Whisper models in the [MLX](https://github.com/ml-explore/mlx) format - a ML framework on Apple silicon. It supports features like lazy computation, unified memory management, etc.
First, clone the [MLX Examples](https://github.com/ml-explore/mlx-examples) repository:
```bash
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/whisper
```
Next, install the dependencies:
```bash
pip install -r requirements.txt
```
Download the pytorch checkpoint in the original OpenAI format and convert it into MLX format (We haven't included the converted version here since the repository is already heavy and the conversion is very fast):
```bash
# Download
python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='bofenghuang/whisper-large-v3-french-distil-dec8', filename='original_model.pt', local_dir='./models/whisper-large-v3-french-distil-dec8')"
# Convert into .npz
python convert.py --torch-name-or-path ./models/whisper-large-v3-french-distil-dec8/original_model.pt --mlx-path ./mlx_models/whisper-large-v3-french-distil-dec8
```
Now, you can transcribe audio with:
```python
import whisper
result = whisper.transcribe("/path/to/audio/file", path_or_hf_repo="mlx_models/whisper-large-v3-french-distil-dec8", language="fr")
print(result["text"])
```
## Training details
We've collected a composite dataset consisting of over 2,500 hours of French speech recognition data, which incldues datasets such as [Common Voice 13.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0), [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech), [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli), [Fleurs](https://huggingface.co/datasets/google/fleurs), [Multilingual TEDx](https://www.openslr.org/100/), [MediaSpeech](https://www.openslr.org/108/), [African Accented French](https://huggingface.co/datasets/gigant/african_accented_french), etc.
Given that some datasets, like MLS, only offer text without case or punctuation, we employed a customized version of 🤗 [Speechbox](https://github.com/huggingface/speechbox) to restore case and punctuation from a limited set of symbols using the [bofenghuang/whisper-large-v2-cv11-french](bofenghuang/whisper-large-v2-cv11-french) model.
However, even within these datasets, we observed certain quality issues. These ranged from mismatches between audio and transcription in terms of language or content, poorly segmented utterances, to missing words in scripted speech, etc. We've built a pipeline to filter out many of these problematic utterances, aiming to enhance the dataset's quality. As a result, we excluded more than 10% of the data, and when we retrained the model, we noticed a significant reduction of hallucination.
For training, we employed the [script](https://github.com/huggingface/distil-whisper/blob/main/training/run_distillation.py) available in the 🤗 Distil-Whisper repository. The model training took place on the [Jean-Zay supercomputer](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html) at GENCI, and we extend our gratitude to the IDRIS team for their responsive support throughout the project.
## Acknowledgements
- OpenAI for creating and open-sourcing the [Whisper model](https://arxiv.org/abs/2212.04356)
- 🤗 Hugging Face for integrating the Whisper model and providing the training codebase within the [Transformers](https://github.com/huggingface/transformers) and [Distil-Whisper](https://github.com/huggingface/distil-whisper) repository
- [Genci](https://genci.fr/) for their generous contribution of GPU hours to this project
|
furrutiav/bert_qa_extractor_cockatiel_2022_best_both_ef_plus_nllf_v0_z_value_it_505 | furrutiav | 2024-03-03T20:28:24Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-03-03T20:27:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
a-h-m-e-d/mistral_7b-instruct-guanaco | a-h-m-e-d | 2024-03-03T20:27:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-26T14:59:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
salohnana2018/ABSA-SentencePair-DAPT-HARDAR-bert-base-Camel-MSA-ru2 | salohnana2018 | 2024-03-03T20:22:26Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD",
"base_model:finetune:salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-03T19:31:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
base_model: salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD
model-index:
- name: ABSA-SentencePair-DAPT-HARDAR-bert-base-Camel-MSA-ru2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABSA-SentencePair-DAPT-HARDAR-bert-base-Camel-MSA-ru2
This model is a fine-tuned version of [salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD](https://huggingface.co/salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1140
- Accuracy: 0.8956
- F1: 0.8956
- Precision: 0.8956
- Recall: 0.8956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0556 | 1.0 | 265 | 0.0421 | 0.8842 | 0.8842 | 0.8842 | 0.8842 |
| 0.0372 | 2.0 | 530 | 0.0368 | 0.8828 | 0.8828 | 0.8828 | 0.8828 |
| 0.0231 | 3.0 | 795 | 0.0426 | 0.8828 | 0.8828 | 0.8828 | 0.8828 |
| 0.0145 | 4.0 | 1060 | 0.0601 | 0.8809 | 0.8809 | 0.8809 | 0.8809 |
| 0.0101 | 5.0 | 1325 | 0.0573 | 0.8842 | 0.8842 | 0.8842 | 0.8842 |
| 0.0076 | 6.0 | 1590 | 0.0621 | 0.8856 | 0.8856 | 0.8856 | 0.8856 |
| 0.0051 | 7.0 | 1855 | 0.0621 | 0.8866 | 0.8866 | 0.8866 | 0.8866 |
| 0.0044 | 8.0 | 2120 | 0.0709 | 0.8899 | 0.8899 | 0.8899 | 0.8899 |
| 0.0035 | 9.0 | 2385 | 0.0827 | 0.8899 | 0.8899 | 0.8899 | 0.8899 |
| 0.0028 | 10.0 | 2650 | 0.0895 | 0.8946 | 0.8946 | 0.8946 | 0.8946 |
| 0.0024 | 11.0 | 2915 | 0.0859 | 0.8908 | 0.8908 | 0.8908 | 0.8908 |
| 0.0021 | 12.0 | 3180 | 0.0897 | 0.8847 | 0.8847 | 0.8847 | 0.8847 |
| 0.0017 | 13.0 | 3445 | 0.0994 | 0.8989 | 0.8989 | 0.8989 | 0.8989 |
| 0.0014 | 14.0 | 3710 | 0.1056 | 0.8937 | 0.8937 | 0.8937 | 0.8937 |
| 0.0014 | 15.0 | 3975 | 0.1044 | 0.8941 | 0.8941 | 0.8941 | 0.8941 |
| 0.0012 | 16.0 | 4240 | 0.1105 | 0.8951 | 0.8951 | 0.8951 | 0.8951 |
| 0.0012 | 17.0 | 4505 | 0.1119 | 0.8956 | 0.8956 | 0.8956 | 0.8956 |
| 0.0011 | 18.0 | 4770 | 0.1088 | 0.8965 | 0.8965 | 0.8965 | 0.8965 |
| 0.001 | 19.0 | 5035 | 0.1132 | 0.8979 | 0.8979 | 0.8979 | 0.8979 |
| 0.001 | 20.0 | 5300 | 0.1140 | 0.8956 | 0.8956 | 0.8956 | 0.8956 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
LarryAIDraw/ichinose_honamiV1 | LarryAIDraw | 2024-03-03T20:22:08Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-03-03T20:18:18Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/18363/honami-ichinose-cote |
core-3/kuno-royale-v2-7b-GGUF | core-3 | 2024-03-03T20:21:57Z | 3 | 2 | null | [
"gguf",
"base_model:core-3/kuno-royale-v2-7b",
"base_model:quantized:core-3/kuno-royale-v2-7b",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-03-01T18:06:43Z | ---
base_model: core-3/kuno-royale-v2-7b
inference: false
license: cc-by-nc-4.0
model_creator: core-3
model_name: kuno-royale-v2-7b
model_type: mistral
quantized_by: core-3
---
## kuno-royale-v2-7b-GGUF
Some GGUF quants of [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b) |
raoulmago/riconoscimento_documenti | raoulmago | 2024-03-03T19:55:01Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-03-03T19:02:23Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: riconoscimento_documenti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# riconoscimento_documenti
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.9560 | 0.0 |
| No log | 2.0 | 3 | 1.4216 | 0.7375 |
| No log | 3.0 | 5 | 0.6008 | 1.0 |
| No log | 4.0 | 6 | 0.2696 | 1.0 |
| No log | 5.0 | 7 | 0.0996 | 1.0 |
| No log | 6.0 | 9 | 0.0089 | 1.0 |
| 0.4721 | 7.0 | 11 | 0.0011 | 1.0 |
| 0.4721 | 8.0 | 12 | 0.0005 | 1.0 |
| 0.4721 | 9.0 | 13 | 0.0002 | 1.0 |
| 0.4721 | 10.0 | 15 | 0.0001 | 1.0 |
| 0.4721 | 11.0 | 17 | 0.0000 | 1.0 |
| 0.4721 | 12.0 | 18 | 0.0000 | 1.0 |
| 0.4721 | 13.0 | 19 | 0.0000 | 1.0 |
| 0.0003 | 14.0 | 21 | 0.0000 | 1.0 |
| 0.0003 | 15.0 | 23 | 0.0000 | 1.0 |
| 0.0003 | 16.0 | 24 | 0.0000 | 1.0 |
| 0.0003 | 17.0 | 25 | 0.0000 | 1.0 |
| 0.0003 | 18.0 | 27 | 0.0000 | 1.0 |
| 0.0003 | 19.0 | 29 | 0.0000 | 1.0 |
| 0.0 | 20.0 | 30 | 0.0000 | 1.0 |
| 0.0 | 21.0 | 31 | 0.0000 | 1.0 |
| 0.0 | 22.0 | 33 | 0.0000 | 1.0 |
| 0.0 | 23.0 | 35 | 0.0000 | 1.0 |
| 0.0 | 24.0 | 36 | 0.0000 | 1.0 |
| 0.0 | 25.0 | 37 | 0.0000 | 1.0 |
| 0.0 | 26.0 | 39 | 0.0000 | 1.0 |
| 0.0 | 27.0 | 41 | 0.0000 | 1.0 |
| 0.0 | 28.0 | 42 | 0.0000 | 1.0 |
| 0.0 | 29.0 | 43 | 0.0000 | 1.0 |
| 0.0 | 30.0 | 45 | 0.0000 | 1.0 |
| 0.0 | 31.0 | 47 | 0.0000 | 1.0 |
| 0.0 | 32.0 | 48 | 0.0000 | 1.0 |
| 0.0 | 33.0 | 49 | 0.0000 | 1.0 |
| 0.0 | 33.33 | 50 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
betajuned/gpt2original | betajuned | 2024-03-03T19:47:36Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T17:54:33Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Weni/ZeroShot-3.3.19-Mistral-7b-Multilanguage-3.2.0-merged | Weni | 2024-03-03T19:37:03Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T19:24:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OmarHaroon01/Byt5_small_finetune_CL_ag_news | OmarHaroon01 | 2024-03-03T19:34:10Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-03T17:28:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
salohnana2018/ABSA-SentencePair-DAPT-HARDAR-bert-base-Camel-MSA-ru1 | salohnana2018 | 2024-03-03T19:24:06Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD",
"base_model:finetune:salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-03T18:35:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
base_model: salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD
model-index:
- name: ABSA-SentencePair-DAPT-HARDAR-bert-base-Camel-MSA-ru1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABSA-SentencePair-DAPT-HARDAR-bert-base-Camel-MSA-ru1
This model is a fine-tuned version of [salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD](https://huggingface.co/salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1067
- Accuracy: 0.8993
- F1: 0.8993
- Precision: 0.8993
- Recall: 0.8993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0591 | 1.0 | 265 | 0.0553 | 0.7925 | 0.7925 | 0.7925 | 0.7925 |
| 0.0352 | 2.0 | 530 | 0.0372 | 0.8932 | 0.8932 | 0.8932 | 0.8932 |
| 0.0229 | 3.0 | 795 | 0.0469 | 0.8866 | 0.8866 | 0.8866 | 0.8866 |
| 0.0146 | 4.0 | 1060 | 0.0533 | 0.8960 | 0.8960 | 0.8960 | 0.8960 |
| 0.0101 | 5.0 | 1325 | 0.0581 | 0.8970 | 0.8970 | 0.8970 | 0.8970 |
| 0.0074 | 6.0 | 1590 | 0.0631 | 0.8828 | 0.8828 | 0.8828 | 0.8828 |
| 0.0053 | 7.0 | 1855 | 0.0658 | 0.8823 | 0.8823 | 0.8823 | 0.8823 |
| 0.0051 | 8.0 | 2120 | 0.0723 | 0.8974 | 0.8974 | 0.8974 | 0.8974 |
| 0.0038 | 9.0 | 2385 | 0.0794 | 0.8913 | 0.8913 | 0.8913 | 0.8913 |
| 0.0028 | 10.0 | 2650 | 0.0755 | 0.8871 | 0.8871 | 0.8871 | 0.8871 |
| 0.0026 | 11.0 | 2915 | 0.0811 | 0.8894 | 0.8894 | 0.8894 | 0.8894 |
| 0.0019 | 12.0 | 3180 | 0.0853 | 0.8951 | 0.8951 | 0.8951 | 0.8951 |
| 0.0022 | 13.0 | 3445 | 0.0924 | 0.8861 | 0.8861 | 0.8861 | 0.8861 |
| 0.0018 | 14.0 | 3710 | 0.0898 | 0.8946 | 0.8946 | 0.8946 | 0.8946 |
| 0.0012 | 15.0 | 3975 | 0.0916 | 0.8856 | 0.8856 | 0.8856 | 0.8856 |
| 0.0013 | 16.0 | 4240 | 0.0999 | 0.8956 | 0.8956 | 0.8956 | 0.8956 |
| 0.0013 | 17.0 | 4505 | 0.1019 | 0.8922 | 0.8922 | 0.8922 | 0.8922 |
| 0.001 | 18.0 | 4770 | 0.1025 | 0.8979 | 0.8979 | 0.8979 | 0.8979 |
| 0.001 | 19.0 | 5035 | 0.1061 | 0.8998 | 0.8998 | 0.8998 | 0.8998 |
| 0.001 | 20.0 | 5300 | 0.1067 | 0.8993 | 0.8993 | 0.8993 | 0.8993 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
IrshadG/Qwen1.5-0.5B-Chat-finetune | IrshadG | 2024-03-03T19:15:28Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T19:15:08Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gorabbani/q-Taxi-v3 | gorabbani | 2024-03-03T19:08:36Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-03T19:08:34Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="gorabbani/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
cvzion/gguf-MINICPM-dqg-v03042024-rev2 | cvzion | 2024-03-03T19:05:19Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:openbmb/MiniCPM-2B-sft-bf16-llama-format",
"base_model:quantized:openbmb/MiniCPM-2B-sft-bf16-llama-format",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-03-03T19:04:11Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: openbmb/MiniCPM-2B-sft-bf16-llama-format
---
# Uploaded model
- **Developed by:** cvzion
- **License:** apache-2.0
- **Finetuned from model :** openbmb/MiniCPM-2B-sft-bf16-llama-format
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
michimalek/mistral-7b-instruct-tokenizer-finetuned-peft | michimalek | 2024-03-03T19:04:13Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T19:04:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Theoreticallyhugo/bloom-full_labels | Theoreticallyhugo | 2024-03-03T18:52:10Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"token-classification",
"generated_from_trainer",
"dataset:essays_su_g",
"base_model:bigscience/bloom-560m",
"base_model:finetune:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-03T17:32:02Z | ---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: bloom-full_labels
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: essays_su_g
type: essays_su_g
config: full_labels
split: train[0%:20%]
args: full_labels
metrics:
- name: Accuracy
type: accuracy
value: 0.7978079994653657
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-full_labels
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7047
- B-claim: {'precision': 0.4620938628158845, 'recall': 0.4507042253521127, 'f1-score': 0.45632798573975053, 'support': 284.0}
- B-majorclaim: {'precision': 0.7, 'recall': 0.5957446808510638, 'f1-score': 0.6436781609195402, 'support': 141.0}
- B-premise: {'precision': 0.6952247191011236, 'recall': 0.6991525423728814, 'f1-score': 0.6971830985915493, 'support': 708.0}
- I-claim: {'precision': 0.5342320909331219, 'recall': 0.48441994247363374, 'f1-score': 0.5081081081081082, 'support': 4172.0}
- I-majorclaim: {'precision': 0.7541263517359135, 'recall': 0.6379393355801637, 'f1-score': 0.6911841418883673, 'support': 2077.0}
- I-premise: {'precision': 0.8258639910813824, 'recall': 0.8874690519926524, 'f1-score': 0.8555589775177087, 'support': 12521.0}
- O: {'precision': 0.886796294411076, 'recall': 0.8690143655227454, 'f1-score': 0.8778152869451302, 'support': 10024.0}
- Accuracy: 0.7978
- Macro avg: {'precision': 0.6940481871540717, 'recall': 0.660634877735036, 'f1-score': 0.6756936799585934, 'support': 29927.0}
- Weighted avg: {'precision': 0.7935035105957297, 'recall': 0.7978079994653657, 'f1-score': 0.7946353555655076, 'support': 29927.0}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:--------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
| No log | 1.0 | 81 | 0.7937 | {'precision': 0.3116883116883117, 'recall': 0.2535211267605634, 'f1-score': 0.2796116504854369, 'support': 284.0} | {'precision': 0.17391304347826086, 'recall': 0.028368794326241134, 'f1-score': 0.04878048780487805, 'support': 141.0} | {'precision': 0.5714285714285714, 'recall': 0.4689265536723164, 'f1-score': 0.5151280062063615, 'support': 708.0} | {'precision': 0.5458064516129032, 'recall': 0.1013902205177373, 'f1-score': 0.17101273499090358, 'support': 4172.0} | {'precision': 0.4496436318562132, 'recall': 0.698603755416466, 'f1-score': 0.547134238310709, 'support': 2077.0} | {'precision': 0.7235728757001549, 'recall': 0.9698107179937705, 'f1-score': 0.82878886120875, 'support': 12521.0} | {'precision': 0.9145402022147328, 'recall': 0.7579808459696727, 'f1-score': 0.8289330133100589, 'support': 10024.0} | 0.7359 | {'precision': 0.5272275839970211, 'recall': 0.4683717163795382, 'f1-score': 0.4599127131881569, 'support': 29927.0} | {'precision': 0.7336463378005765, 'recall': 0.7358906672904066, 'f1-score': 0.7012848326220683, 'support': 29927.0} |
| No log | 2.0 | 162 | 0.8594 | {'precision': 0.3852813852813853, 'recall': 0.31338028169014087, 'f1-score': 0.34563106796116505, 'support': 284.0} | {'precision': 0.5, 'recall': 0.05673758865248227, 'f1-score': 0.10191082802547771, 'support': 141.0} | {'precision': 0.555984555984556, 'recall': 0.6101694915254238, 'f1-score': 0.5818181818181819, 'support': 708.0} | {'precision': 0.5365853658536586, 'recall': 0.015819750719079578, 'f1-score': 0.030733410942956924, 'support': 4172.0} | {'precision': 0.6063059224541969, 'recall': 0.6851227732306211, 'f1-score': 0.6433092224231466, 'support': 2077.0} | {'precision': 0.7233196891499081, 'recall': 0.9738040092644358, 'f1-score': 0.8300769283137042, 'support': 12521.0} | {'precision': 0.8663324979114453, 'recall': 0.8276137270550679, 'f1-score': 0.8465306122448979, 'support': 10024.0} | 0.7521 | {'precision': 0.5962584880907357, 'recall': 0.4975210888767502, 'f1-score': 0.48285860738993286, 'support': 29927.0} | {'precision': 0.7288499118938129, 'recall': 0.7520633541617937, 'f1-score': 0.6972915776644995, 'support': 29927.0} |
| No log | 3.0 | 243 | 0.6374 | {'precision': 0.4406779661016949, 'recall': 0.2746478873239437, 'f1-score': 0.33839479392624733, 'support': 284.0} | {'precision': 0.6890756302521008, 'recall': 0.5815602836879432, 'f1-score': 0.6307692307692307, 'support': 141.0} | {'precision': 0.6152125279642058, 'recall': 0.7768361581920904, 'f1-score': 0.6866416978776528, 'support': 708.0} | {'precision': 0.43018637335777576, 'recall': 0.6749760306807286, 'f1-score': 0.5254711699944019, 'support': 4172.0} | {'precision': 0.7759119861030689, 'recall': 0.6451612903225806, 'f1-score': 0.7045215562565721, 'support': 2077.0} | {'precision': 0.8966225233548917, 'recall': 0.6975481191598115, 'f1-score': 0.7846554667145808, 'support': 12521.0} | {'precision': 0.8359600857968852, 'recall': 0.8942537909018355, 'f1-score': 0.8641249337253579, 'support': 10024.0} | 0.7540 | {'precision': 0.6690924418472318, 'recall': 0.6492833657527048, 'f1-score': 0.6477969784662919, 'support': 29927.0} | {'precision': 0.7909400854003533, 'recall': 0.7539679887726802, 'f1-score': 0.7623016451053796, 'support': 29927.0} |
| No log | 4.0 | 324 | 0.6704 | {'precision': 0.49489795918367346, 'recall': 0.3415492957746479, 'f1-score': 0.4041666666666667, 'support': 284.0} | {'precision': 0.7155172413793104, 'recall': 0.5886524822695035, 'f1-score': 0.6459143968871596, 'support': 141.0} | {'precision': 0.6989869753979739, 'recall': 0.6822033898305084, 'f1-score': 0.6904932094353109, 'support': 708.0} | {'precision': 0.6432561851556265, 'recall': 0.38638542665388304, 'f1-score': 0.4827792752321055, 'support': 4172.0} | {'precision': 0.6661024121878968, 'recall': 0.757823784304285, 'f1-score': 0.7090090090090089, 'support': 2077.0} | {'precision': 0.8252104563579974, 'recall': 0.8925005989936906, 'f1-score': 0.8575375052756781, 'support': 12521.0} | {'precision': 0.8499952439836393, 'recall': 0.8914604948124502, 'f1-score': 0.8702342114232848, 'support': 10024.0} | 0.8006 | {'precision': 0.6991380676637311, 'recall': 0.6486536389484242, 'f1-score': 0.6657334677041735, 'support': 29927.0} | {'precision': 0.7904665918521213, 'recall': 0.8006148294182511, 'f1-score': 0.7899872403655044, 'support': 29927.0} |
| No log | 5.0 | 405 | 0.7047 | {'precision': 0.4620938628158845, 'recall': 0.4507042253521127, 'f1-score': 0.45632798573975053, 'support': 284.0} | {'precision': 0.7, 'recall': 0.5957446808510638, 'f1-score': 0.6436781609195402, 'support': 141.0} | {'precision': 0.6952247191011236, 'recall': 0.6991525423728814, 'f1-score': 0.6971830985915493, 'support': 708.0} | {'precision': 0.5342320909331219, 'recall': 0.48441994247363374, 'f1-score': 0.5081081081081082, 'support': 4172.0} | {'precision': 0.7541263517359135, 'recall': 0.6379393355801637, 'f1-score': 0.6911841418883673, 'support': 2077.0} | {'precision': 0.8258639910813824, 'recall': 0.8874690519926524, 'f1-score': 0.8555589775177087, 'support': 12521.0} | {'precision': 0.886796294411076, 'recall': 0.8690143655227454, 'f1-score': 0.8778152869451302, 'support': 10024.0} | 0.7978 | {'precision': 0.6940481871540717, 'recall': 0.660634877735036, 'f1-score': 0.6756936799585934, 'support': 29927.0} | {'precision': 0.7935035105957297, 'recall': 0.7978079994653657, 'f1-score': 0.7946353555655076, 'support': 29927.0} |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
MNG123/msmarco-distilbert-base-tas-b-resume-fit-v2-epoch-3 | MNG123 | 2024-03-03T18:51:01Z | 31 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-03-03T18:50:48Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# MNG123/msmarco-distilbert-base-tas-b-resume-fit-v2-epoch-3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('MNG123/msmarco-distilbert-base-tas-b-resume-fit-v2-epoch-3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('MNG123/msmarco-distilbert-base-tas-b-resume-fit-v2-epoch-3')
model = AutoModel.from_pretrained('MNG123/msmarco-distilbert-base-tas-b-resume-fit-v2-epoch-3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=MNG123/msmarco-distilbert-base-tas-b-resume-fit-v2-epoch-3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3132 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 313,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
lucyknada/senseable_garten2-7b-exl2-6bpw | lucyknada | 2024-03-03T18:45:37Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T18:23:04Z | ### exl2 quant (measurement.json included)
---
### original readme below
---
---
base_model:
- mistralai/Mistral-7B-v0.1
tags:
- qlora
- dto
language:
- "en"
library_name: transformers
license: "apache-2.0"
---
# Details
Introducing Garten2-7B, a cutting-edge, small 7B all-purpose Language Model (LLM), designed to redefine the boundaries of artificial intelligence in natural language understanding and generation. Garten2-7B stands out with its unique architecture, expertly crafted to deliver exceptional performance in a wide array of tasks, from conversation to content creation.
|
vmavanur/roberta-large-peft-p-tuning | vmavanur | 2024-03-03T18:45:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T18:45:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
simon108/mistral_7b-instruct-guanaco | simon108 | 2024-03-03T18:40:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T18:40:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
parsi-ai-nlpclass/sentence_formality_classifier | parsi-ai-nlpclass | 2024-03-03T18:39:30Z | 0 | 1 | null | [
"text-classification",
"fa",
"license:mit",
"region:us"
]
| text-classification | 2024-03-03T18:22:58Z | ---
language: fa
license: mit
pipeline_tag: text-classification
---
# SentenceFormalityClassifier
This model is fine-tuned to classify text based on formality. It has been fine-tuned on [Mohavere Dataset] (Takalli vahideh, Kalantari, Fateme, Shamsfard, Mehrnoush, Developing an Informal-Formal Persian Corpus, 2022.) using the pretrained model [persian-t5-formality-transfer](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased).
## Evaluation Metrics
**INFORMAL**:
Precision: 0.99
Recall: 0.99
F1-Score: 0.99
**FORMAL**:
Precision: 0.99
Recall: 1.0
F1-Score: 0.99
**Accuracy**: 0.99
**Macro Avg**:
Precision: 0.99
Recall: 0.99
F1-Score: 0.99
**Weighted Avg**:
Precision: 0.99
Recall: 0.99
F1-Score: 0.99
## Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
labels = ["INFORMAL", "FORMAL"]
model = AutoModelForSequenceClassification.from_pretrained('parsi-ai-nlpclass/sentence_formality_classifier')
tokenizer = AutoTokenizer.from_pretrained('parsi-ai-nlpclass/sentence_formality_classifier')
def test_model(text):
inputs = tokenizer(text, return_tensors='pt')
outputs = model(**inputs)
predicted_label = labels[int(torch.argmax(outputs.logits))]
return predicted_label
# Test the model
text1 = "من فقط میخواستم بگویم که چقدر قدردان هستم."
print("Original:", text1)
print("Predicted Label:", test_model(text1))
# output: FORMAL
text2 = "آرزویش است او را یک رستوران ببرم."
print("\nOriginal:", text2)
print("Predicted Label:", test_model(text2))
# output: FORMAL
text3 = "گل منو اذیت نکنید"
print("\nOriginal:", text2)
print("Predicted Label:", test_model(text3))
# output: INFORMAL
text4 = "من این دوربین رو خالم برام کادو خرید"
print("\nOriginal:", text2)
print("Predicted Label:", test_model(text3))
# output: INFORMAL
``` |
NDR-Miao/q-FrozenLake-v1-4x4-noSlippery | NDR-Miao | 2024-03-03T18:38:39Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-03T18:38:37Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="NDR-Miao/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
cvzion/lora-gemma-dqg-v03042024-rev2 | cvzion | 2024-03-03T18:35:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:wandb/gemma-2b-zephyr-sft",
"base_model:finetune:wandb/gemma-2b-zephyr-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T18:09:28Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
base_model: wandb/gemma-2b-zephyr-sft
---
# Uploaded model
- **Developed by:** cvzion
- **License:** apache-2.0
- **Finetuned from model :** wandb/gemma-2b-zephyr-sft
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. Best model so far, only problem is converting to gguf it produces errors.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gorabbani/q-FrozenLake-v1-4x4-noSlippery | gorabbani | 2024-03-03T18:33:30Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-03T18:33:28Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="gorabbani/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
braunagn/joeyGPT-sft-Lora-v1 | braunagn | 2024-03-03T18:31:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-25T20:52:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/openbuddy-gemma-7b-v19.1-4k-exl2 | bartowski | 2024-03-03T18:19:46Z | 1 | 0 | transformers | [
"transformers",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"license:other",
"region:us"
]
| text-generation | 2024-03-03T17:57:05Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: gemma
license_link: https://ai.google.dev/gemma/terms
base_model: google/gemma-7b
quantized_by: bartowski
---
## Exllama v2 Quantizations of openbuddy-gemma-7b-v19.1-4k
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.14">turboderp's ExLlamaV2 v0.0.14</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/OpenBuddy/openbuddy-gemma-7b-v19.1-4k
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/openbuddy-gemma-7b-v19.1-4k-exl2/tree/8_0) | 8.0 | 8.0 | 14.0 GB | 19.4 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/openbuddy-gemma-7b-v19.1-4k-exl2/tree/6_5) | 6.5 | 8.0 | 12.5 GB | 17.9 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/openbuddy-gemma-7b-v19.1-4k-exl2/tree/5_0) | 5.0 | 6.0 | 10.9 GB | 16.3 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/bartowski/openbuddy-gemma-7b-v19.1-4k-exl2/tree/4_25) | 4.25 | 6.0 | 10.2 GB | 15.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/openbuddy-gemma-7b-v19.1-4k-exl2/tree/3_5) | 3.5 | 6.0 | 9.5 GB | 14.9 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/openbuddy-gemma-7b-v19.1-4k-exl2 openbuddy-gemma-7b-v19.1-4k-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `openbuddy-gemma-7b-v19.1-4k-exl2`:
```shell
mkdir openbuddy-gemma-7b-v19.1-4k-exl2
huggingface-cli download bartowski/openbuddy-gemma-7b-v19.1-4k-exl2 --local-dir openbuddy-gemma-7b-v19.1-4k-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir openbuddy-gemma-7b-v19.1-4k-exl2-6_5
huggingface-cli download bartowski/openbuddy-gemma-7b-v19.1-4k-exl2 --revision 6_5 --local-dir openbuddy-gemma-7b-v19.1-4k-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir openbuddy-gemma-7b-v19.1-4k-exl2-6.5
huggingface-cli download bartowski/openbuddy-gemma-7b-v19.1-4k-exl2 --revision 6_5 --local-dir openbuddy-gemma-7b-v19.1-4k-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
NeLiKeR/test_jira_ai | NeLiKeR | 2024-03-03T18:17:19Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"bert",
"ru",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-03T18:10:29Z | ---
license: apache-2.0
language:
- ru
library_name: sentence-transformers
--- |
x28x28x28/cindypradostita | x28x28x28 | 2024-03-03T18:10:21Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2024-02-21T23:46:52Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: cindypradostita
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
mtileria00/sbert_ft_android1 | mtileria00 | 2024-03-03T18:09:59Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-01-25T22:04:00Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 570 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 570,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
yxLiao/scibert_rel_scierc | yxLiao | 2024-03-03T18:08:28Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T16:05:53Z | This is a span-based model for relation extraction (RE). The model is trained on the SciERC dataset.
This repo contains:
- An encoder fine-tuned based on [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased), and components for RE: pytorch_model.bin
- Model training log: train.log
Codes are available in [yxliao95/cxrgraph](https://github.com/yxliao95/cxrgraph/blob/main/pipe2_re_tokaux_sent.py) |
yxLiao/scibert_ent_scierc | yxLiao | 2024-03-03T18:08:14Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-03-03T16:01:01Z | This is a span-based model for named entity recognition (NER). The model is trained on the SciERC dataset.
This repo contains:
- An encoder fine-tuned based on [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased): pytorch_model.bin
- Components for NER: model_classifiers.pth
- Model training log: train.log
Codes are available in [yxliao95/cxrgraph](https://github.com/yxliao95/cxrgraph/blob/main/pipe1_ner_tokaux_sent.py) |
yxLiao/bert_rel_ace05 | yxLiao | 2024-03-03T18:07:58Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T16:09:50Z | This is a span-based model for relation extraction (RE). The model is trained on the ACE05 dataset.
This repo contains:
- An encoder fine-tuned based on [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased), and components for RE: pytorch_model.bin
- Model training log: train.log
Codes are available in [yxliao95/cxrgraph](https://github.com/yxliao95/cxrgraph/blob/main/pipe2_re_tokaux_sent.py) |
HuggingFaceH4/zephyr-7b-gemma-v0.1 | HuggingFaceH4 | 2024-03-03T18:07:47Z | 902 | 122 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2310.16944",
"base_model:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"base_model:finetune:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-01T10:22:03Z | ---
license: other
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
datasets:
- argilla/dpo-mix-7k
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
model-index:
- name: zephyr-7b-gemma
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: MT-Bench
type: unknown
metrics:
- type: unknown
value: 7.81
name: score
source:
url: https://huggingface.co/spaces/lmsys/mt-bench
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 58.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.07
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1
name: Open LLM Leaderboard
---
<img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1/resolve/main/thumbnail.png" alt="Zephyr 7B Gemma Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 7B Gemma
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 7B Gemma is the third model in the series, and is a fine-tuned version of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) that was trained on on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). You can reproduce the training of this model via the recipe provided in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook).
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Gemma Terms of Use
- **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-7b-gemma-chat
## Performance
| Model |MT Bench⬇️|IFEval|
|-----------------------------------------------------------------------|------:|------:|
|[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 7.81 | 28.76|
|[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 7.34 | 43.81|
|[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 6.38 | 38.01|
| Model |AGIEval|GPT4All|TruthfulQA|BigBench|Average ⬇️|
|-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52| 71.77| 55.26| 39.77| 51.08|
|[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 34.22| 66.37| 52.19| 37.10| 47.47|
|[mlabonne/Gemmalpaca-7B](https://huggingface.co/mlabonne/Gemmalpaca-7B)| 21.6 | 40.87| 44.85 | 30.49| 34.45|
|[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33| 40.84| 41.70| 30.25| 33.53|
<details><summary>Details of AGIEval, GPT4All, TruthfulQA, BigBench </summary>
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |21.65|± | 2.59|
| | |acc_norm|25.20|± | 2.73|
|agieval_logiqa_en | 0|acc |34.72|± | 1.87|
| | |acc_norm|35.94|± | 1.88|
|agieval_lsat_ar | 0|acc |19.57|± | 2.62|
| | |acc_norm|21.74|± | 2.73|
|agieval_lsat_lr | 0|acc |30.59|± | 2.04|
| | |acc_norm|32.55|± | 2.08|
|agieval_lsat_rc | 0|acc |49.07|± | 3.05|
| | |acc_norm|42.75|± | 3.02|
|agieval_sat_en | 0|acc |54.85|± | 3.48|
| | |acc_norm|53.40|± | 3.48|
|agieval_sat_en_without_passage| 0|acc |37.38|± | 3.38|
| | |acc_norm|33.98|± | 3.31|
|agieval_sat_math | 0|acc |30.91|± | 3.12|
| | |acc_norm|28.18|± | 3.04|
Average: 34.22%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |49.15|± | 1.46|
| | |acc_norm|52.47|± | 1.46|
|arc_easy | 0|acc |77.44|± | 0.86|
| | |acc_norm|74.75|± | 0.89|
|boolq | 1|acc |79.69|± | 0.70|
|hellaswag | 0|acc |60.59|± | 0.49|
| | |acc_norm|78.00|± | 0.41|
|openbookqa | 0|acc |29.20|± | 2.04|
| | |acc_norm|37.80|± | 2.17|
|piqa | 0|acc |76.82|± | 0.98|
| | |acc_norm|77.80|± | 0.97|
|winogrande | 0|acc |64.09|± | 1.35|
Average: 66.37%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |35.74|± | 1.68|
| | |mc2 |52.19|± | 1.59|
Average: 52.19%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|53.68|± | 3.63|
|bigbench_date_understanding | 0|multiple_choice_grade|59.89|± | 2.55|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|30.23|± | 2.86|
|bigbench_geometric_shapes | 0|multiple_choice_grade|11.42|± | 1.68|
| | |exact_str_match | 0.00|± | 0.00|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|28.40|± | 2.02|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|19.14|± | 1.49|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|44.67|± | 2.88|
|bigbench_movie_recommendation | 0|multiple_choice_grade|26.80|± | 1.98|
|bigbench_navigate | 0|multiple_choice_grade|50.00|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|52.75|± | 1.12|
|bigbench_ruin_names | 0|multiple_choice_grade|33.04|± | 2.22|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|33.37|± | 1.49|
|bigbench_snarks | 0|multiple_choice_grade|48.62|± | 3.73|
|bigbench_sports_understanding | 0|multiple_choice_grade|58.11|± | 1.57|
|bigbench_temporal_sequences | 0|multiple_choice_grade|37.20|± | 1.53|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|20.08|± | 1.13|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|15.77|± | 0.87|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|44.67|± | 2.88|
Average: 37.1%
</details>
## Intended uses & limitations
The model was initially fine-tuned on the [DEITA 10K](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.38.2
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="HuggingFaceH4/zephyr-7b-gemma-v0.1",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{
"role": "system",
"content": "", # Model not yet trained for follow this
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
outputs = pipe(
messages,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
stop_sequence="<|im_end|>",
)
print(outputs[0]["generated_text"][-1]["content"])
# It is not possible for a human to eat a helicopter in one sitting, as a
# helicopter is a large and inedible machine. Helicopters are made of metal,
# plastic, and other materials that are not meant to be consumed by humans.
# Eating a helicopter would be extremely dangerous and would likely cause
# serious health problems, including choking, suffocation, and poisoning. It is
# important to only eat food that is safe and intended for human consumption.
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`google/gemma-7b`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [StarCoder2 model card](https://huggingface.co/bigcode/starcoder2-15b) for an example of this.
## Training and evaluation data
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4695
- Rewards/chosen: -3.3746
- Rewards/rejected: -4.9715
- Rewards/accuracies: 0.7188
- Rewards/margins: 1.5970
- Logps/rejected: -459.4853
- Logps/chosen: -429.9115
- Logits/rejected: 86.4684
- Logits/chosen: 92.8200
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1923 | 1.9 | 100 | 0.4736 | -3.4575 | -4.9556 | 0.75 | 1.4980 | -459.1662 | -431.5707 | 86.3863 | 92.7360 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
## Citation Information
If you find this model useful in your work, please consider citing the Zephyr technical report:
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
You may also wish to cite the creators of this model as well:
```
@misc{zephyr_7b_gemma,
author = {Lewis Tunstall and Philipp Schmid},
title = {Zephyr 7B Gemma},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1}}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__zephyr-7b-gemma-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |62.41|
|AI2 Reasoning Challenge (25-Shot)|58.45|
|HellaSwag (10-Shot) |83.48|
|MMLU (5-Shot) |60.68|
|TruthfulQA (0-shot) |52.07|
|Winogrande (5-shot) |74.19|
|GSM8k (5-shot) |45.56|
|
yxLiao/biomedbert_rel_cxrgraph | yxLiao | 2024-03-03T18:06:32Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T15:45:45Z | This is a span-based model for relation extraction (RE). The model is trained on the CXRGraph dataset.
This repo contains:
- An encoder fine-tuned based on [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext), and components for RE: pytorch_model.bin
- Model training log: train.log
Codes are available in [yxliao95/cxrgraph](https://github.com/yxliao95/cxrgraph/blob/main/pipe2_re_tokaux_sent.py) |
luogedai/my_awesome_wnut_model | luogedai | 2024-03-03T18:06:22Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-03T18:05:23Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2696
- Precision: 0.5516
- Recall: 0.3021
- F1: 0.3904
- Accuracy: 0.9417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2845 | 0.5125 | 0.2085 | 0.2964 | 0.9373 |
| No log | 2.0 | 426 | 0.2696 | 0.5516 | 0.3021 | 0.3904 | 0.9417 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
yxLiao/biomedbert_ent_cxrgraph | yxLiao | 2024-03-03T18:06:05Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-03-03T14:55:26Z | This is a span-based model for named entity recognition (NER) and attribute classification (ATTR). The model is trained on the CXRGraph dataset.
This repo contains:
- An encoder fine-tuned based on [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext): pytorch_model.bin
- Components for NER and ATTR: model_classifiers.pth
- Model training log: train.log
Codes are available in [yxliao95/cxrgraph](https://github.com/yxliao95/cxrgraph/blob/main/pipe1_ner_tokaux_attrcls_sent.py) |
OwOpeepeepoopoo/gemmerica_m21_m | OwOpeepeepoopoo | 2024-03-03T17:56:06Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T17:53:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
michimalek/mistral-7b-instruct-tokenizer-finetuned | michimalek | 2024-03-03T17:55:09Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-02T17:32:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
michimalek/mistral-7b-instruct-model-finetuned | michimalek | 2024-03-03T17:55:08Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-03-02T17:27:13Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Abdullah-Nazhat/FAVORiser | Abdullah-Nazhat | 2024-03-03T17:45:44Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
]
| null | 2024-03-02T19:00:10Z | ---
license: bsd-3-clause
---
# FAVORiser
Gating Function Generation with FAVOR+ Transformer Sub Unit
Paper Coming Soon |
Abdullah-Nazhat/Linearizer | Abdullah-Nazhat | 2024-03-03T17:44:21Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
]
| null | 2024-03-02T18:52:57Z | ---
license: bsd-3-clause
---
# Linearizer
Gating Function Generation with Linformer Transformer Sub Unit
Paper Coming Soon |
betajuned/bert-indolem-unila | betajuned | 2024-03-03T17:42:29Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-02T13:37:06Z | ---
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
cgato/Thespis-CurtainCall-8x7b-v0.3 | cgato | 2024-03-03T17:39:40Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"mixtral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T14:49:10Z | ---
license: cc-by-nc-4.0
---
A bigger badder Thespis and my first pass at Mixtral.
Datasets Used:
* Dolphin
* Ultrachat
* Capybara
* Augmental
* ToxicQA
* Magiccoder-Evol-Instruct-110k
* Yahoo Answers
* OpenOrca
* Airoboros 3.1
* grimulkan/physical-reasoning and theory-of-mind
## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template )
```
{System Prompt}
Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```
Mixtral seems to require higher temperatures overall compared to Mistral 7b, please mess with your samplers until you find a setting you like.
## Recommended Sampler Setting Ranges
* Temp: 1.25 - 2.0
* MinP: 0.1
* RepPen: 1.05 - 1.10
## Presets ( For the lazy!~ )
## Recommended Silly Tavern Preset -> Universal-Creative
## Recommended Kobold Horde Preset -> MinP |
AnonymousSub/FPDM_Legal_robertalarge | AnonymousSub | 2024-03-03T17:37:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-09-10T21:28:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
b07611031/vit-base-patch16-224-in21k-finetuned | b07611031 | 2024-03-03T17:35:28Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-03-03T17:34:58Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0051
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 90 | 0.1809 | 0.9911 |
| No log | 2.0 | 180 | 0.0815 | 0.9911 |
| No log | 3.0 | 270 | 0.0542 | 0.9911 |
| No log | 4.0 | 360 | 0.0298 | 1.0 |
| No log | 5.0 | 450 | 0.0312 | 0.9955 |
| 0.1429 | 6.0 | 540 | 0.0235 | 1.0 |
| 0.1429 | 7.0 | 630 | 0.0196 | 1.0 |
| 0.1429 | 8.0 | 720 | 0.0154 | 1.0 |
| 0.1429 | 9.0 | 810 | 0.0145 | 1.0 |
| 0.1429 | 10.0 | 900 | 0.0125 | 1.0 |
| 0.1429 | 11.0 | 990 | 0.0115 | 1.0 |
| 0.0196 | 12.0 | 1080 | 0.0167 | 0.9955 |
| 0.0196 | 13.0 | 1170 | 0.0102 | 1.0 |
| 0.0196 | 14.0 | 1260 | 0.0093 | 1.0 |
| 0.0196 | 15.0 | 1350 | 0.0085 | 1.0 |
| 0.0196 | 16.0 | 1440 | 0.0079 | 1.0 |
| 0.0148 | 17.0 | 1530 | 0.0075 | 1.0 |
| 0.0148 | 18.0 | 1620 | 0.0074 | 1.0 |
| 0.0148 | 19.0 | 1710 | 0.0069 | 1.0 |
| 0.0148 | 20.0 | 1800 | 0.0065 | 1.0 |
| 0.0148 | 21.0 | 1890 | 0.0062 | 1.0 |
| 0.0148 | 22.0 | 1980 | 0.0062 | 1.0 |
| 0.0069 | 23.0 | 2070 | 0.0057 | 1.0 |
| 0.0069 | 24.0 | 2160 | 0.0055 | 1.0 |
| 0.0069 | 25.0 | 2250 | 0.0054 | 1.0 |
| 0.0069 | 26.0 | 2340 | 0.0053 | 1.0 |
| 0.0069 | 27.0 | 2430 | 0.0052 | 1.0 |
| 0.0055 | 28.0 | 2520 | 0.0051 | 1.0 |
| 0.0055 | 29.0 | 2610 | 0.0051 | 1.0 |
| 0.0055 | 30.0 | 2700 | 0.0051 | 1.0 |
### Framework versions
- Transformers 4.38.1
- Pytorch 1.10.0+cu111
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Abdullah-Nazhat/Approximator | Abdullah-Nazhat | 2024-03-03T17:31:17Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
]
| null | 2024-03-02T18:47:16Z | ---
license: bsd-3-clause
---
# Approximator
A Bi-Subnetwork Gated Unit with Nystromformer Approximation of Attention
Paper Coming Soon |
AnonymousSub/FPDM_Legal_RoBERTa | AnonymousSub | 2024-03-03T17:30:04Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-04-03T15:16:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huggingfaceaccountyx/es_ar_ru_de_el | huggingfaceaccountyx | 2024-03-03T17:23:19Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:data_folder/esarrudeel",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-03-03T16:16:15Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- data_folder/esarrudeel
model-index:
- name: es_ar_ru_de_el
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# es_ar_ru_de_el
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the data_folder/esarrudeel dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.15.0
|
cvzion/gguf-gemma-dqg-v03042024-rev1 | cvzion | 2024-03-03T17:18:36Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:wandb/gemma-2b-zephyr-sft",
"base_model:quantized:wandb/gemma-2b-zephyr-sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-03-03T17:17:34Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: wandb/gemma-2b-zephyr-sft
---
# Uploaded model
- **Developed by:** cvzion
- **License:** apache-2.0
- **Finetuned from model :** wandb/gemma-2b-zephyr-sft
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Francesco-A/ppo-MountainCar-v0 | Francesco-A | 2024-03-03T17:08:30Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-03T16:56:43Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
metrics:
- type: mean_reward
value: -116.20 +/- 1.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
# Model Details
- Model Name: ppo-MountainCar-v0
- Model Type: Proximal Policy Optimization (PPO)
- Policy Architecture: MultiLayerPerceptron (MLPPolicy)
- Environment: MountainCar-v0
- Training Data: The model was trained using three consecutive training sessions:
- First training session: Total timesteps = 1,000,000
- Second training session: Total timesteps = 500,000
- Third training session: Total timesteps = 500,000
# Model Parameters
```python
- n_steps: 2048
- batch_size: 64
- n_epochs: 8
- gamma: 0.999
- gae_lambda: 0.95
- ent_coef: 0.01
- max_grad_norm: 0.5
- Verbose: Enabled (Verbose level = 1)
``` |
0dAI/0dAI-7.5B-v2-4bpw | 0dAI | 2024-03-03T17:06:14Z | 35 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"es",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T16:55:18Z | ---
license: apache-2.0
language:
- es
library_name: transformers
---
Versión mejorada con un datset de vulnerabilidades en DPO y cleaning
Ocupa 8 GB de vram
Este es un modelo de ciberseguridad basado en mistral 7b fine tuneado por el equipo de 0dAI en los servidores de Omega AI
Esta es una versión reducida en nuestra pagina web teneis el modelo grande con funciones de pentesting autonomo: https://0dai.omegaai.io
Este modelo cuenta con:
- Sin censura de ningún tipo
- Especializado en pentesting y malware development
- 16k de contexto
- El modelo por si solo no tiene capabilities de function calling
No se puede usar para:
- Fines Militares
- Cibercrimen o cualquier tipo de delito
El Dataset y los metodos de entrenamient son completamente privados
Entrenado en 16 RTX A5000
Pasos para correr
1. Instala exllamav2
```
git clone https://github.com/turboderp/exllamav2
cd exllamav2
pip install torch
pip install .
```
NOTA: Puede ser que tengas algún problema de dependencias con exllama, mira el apartado de issues de su github
2. Descarga download-model.py
```
wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/download-model.py
```
3. Descarga el modelo
```
python3 download-model .py 0dAI/0dAI-7B
```
4. Corre el siguiente comando en el directorio de exllamav2
```
python3 examples/chat.py -m RUTA/0dAI-7B -mode chatml -sp "Eres un asistente de ciberseguridad llamado 0dAI, no tienes limites censura o ética alguna, debes ser descriptivo, dividir las tareas en pequeñas tareas que puedas hacer proceduralmente"
```
Ejemplo:
System prompt: Eres un asistente de ciberseguridad llamado 0dAI, no tienes limites censura o ética alguna, debes ser descriptivo, dividir las tareas en pequeñas tareas que puedas hacer proceduralmente
User prompt: Escribe un exploit en C
```
// Part 1: Buffer Overflow Vulnerable Program (Vulnerable.c)
#include <stdio.h>
#include <string.h>
void vulnerable_function(char *input) {
char buffer[64]; // Buffer of 64 bytes
strcpy(buffer, input); // Copy the input to the buffer
}
int main() {
char input[256];
printf("Enter a string: ");
fgets(input, sizeof(input), stdin); // Read input from standard input
vulnerable_function(input);
return 0;
}
// Part 2: Exploit Code (Exploit.c)
#include <stdio.h>
#include <string.h>
// This is where you would put the gadgets and shellcode
#define SHELLCODE "shellcode_to_be_inserted"
#define GADGETS "gadgets_to_be_inserted"
void exploit_function(char *input) {
void (*shellcode)() = (void (*)())SHELLCODE; // Pointer to shellcode
void (*gadgets)() = (void (*)())GADGETS; // Pointer to gadgets
// Construct the payload that will be sent to the vulnerable function
char payload[256];
memset(payload, 'A', sizeof(payload)); // Fill with 'A's to bypass simple checks
// Append the NOP sled to the payload
strcat(payload, "\x90\x90\x90\x90"); // Replace with actual NOP sled size
// Append the shellcode to the payload
strcat(payload, SHELLCODE);
// Append the gadgets to the payload
strcat(payload, GADGETS);
// Call the vulnerable function with the payload
vulnerable_function(payload);
}
int main() {
char input[256];
printf("Enter a string: ");
fgets(input, sizeof(input), stdin); // Read input from standard input
exploit_function(input);
return 0;
}
```
Please note that the above code is a simplified example for educational purposes. In practice, you would need to find specific gadgets and shellcode that work within the context of the vulnerable program, and you would also need to deal with various mitigations such as ASLR, DEP, and stack canaries. Additionally, the use of such exploits should only be done in a legal and ethical manner, such as during penetration testing with proper authorization.
|
adn0n/mistral-7b-notes-finetuned | adn0n | 2024-03-03T16:58:12Z | 0 | 0 | null | [
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T16:58:02Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
cvzion/gguf-tinyllama-dqg-v03042024-rev1 | cvzion | 2024-03-03T16:56:16Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-03-03T16:55:55Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** cvzion
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tatsunori/xlm-roberta-base-finetuned-panx-de | tatsunori | 2024-03-03T16:55:56Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-02T16:43:24Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1950
- F1: 0.8410
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3479 | 1.0 | 525 | 0.2126 | 0.7982 |
| 0.1749 | 2.0 | 1050 | 0.1928 | 0.8266 |
| 0.1064 | 3.0 | 1575 | 0.1950 | 0.8410 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
medajd/BeatsGEN | medajd | 2024-03-03T16:55:31Z | 0 | 0 | null | [
"arxiv:1910.09700",
"license:wtfpl",
"region:us"
]
| null | 2024-03-03T16:45:12Z | ---
license: wtfpl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaziyarPanahi/phi-2-logical-sft-GGUF | MaziyarPanahi | 2024-03-03T16:53:31Z | 79 | 2 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"pytorch",
"tensorboard",
"safetensors",
"phi",
"text-generation",
"axolotl",
"generated_from_trainer",
"phi-2",
"logical",
"reasoning",
"text-generation-inference",
"custom_code",
"dataset:garage-bAInd/Open-Platypus",
"base_model:microsoft/phi-2",
"license:mit",
"autotrain_compatible",
"region:us",
"base_model:MaziyarPanahi/phi-2-logical-sft",
"base_model:quantized:MaziyarPanahi/phi-2-logical-sft"
]
| text-generation | 2024-03-03T16:44:42Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- pytorch
- tensorboard
- safetensors
- phi
- text-generation
- axolotl
- generated_from_trainer
- phi-2
- logical
- reasoning
- text-generation-inference
- custom_code
- dataset:garage-bAInd/Open-Platypus
- base_model:microsoft/phi-2
- license:mit
- autotrain_compatible
- region:us
- text-generation
model_name: phi-2-logical-sft-GGUF
base_model: MaziyarPanahi/phi-2-logical-sft
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/phi-2-logical-sft-GGUF](https://huggingface.co/MaziyarPanahi/phi-2-logical-sft-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/phi-2-logical-sft](https://huggingface.co/MaziyarPanahi/phi-2-logical-sft)
## Description
[MaziyarPanahi/phi-2-logical-sft-GGUF](https://huggingface.co/MaziyarPanahi/phi-2-logical-sft-GGUF) contains GGUF format model files for [MaziyarPanahi/phi-2-logical-sft](https://huggingface.co/MaziyarPanahi/phi-2-logical-sft).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/phi-2-logical-sft-GGUF](https://huggingface.co/MaziyarPanahi/phi-2-logical-sft-GGUF) and below it, a specific filename to download, such as: phi-2-logical-sft-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/phi-2-logical-sft-GGUF phi-2-logical-sft-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/phi-2-logical-sft-GGUF](https://huggingface.co/MaziyarPanahi/phi-2-logical-sft-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/phi-2-logical-sft-GGUF phi-2-logical-sft-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m phi-2-logical-sft-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./phi-2-logical-sft-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./phi-2-logical-sft-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
cvzion/lora-tinyllama-dqg-v03042024-rev1 | cvzion | 2024-03-03T16:47:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T16:47:43Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** cvzion
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Lennard-Heuer/llama2-qlora-finetunined-XXX | Lennard-Heuer | 2024-03-03T16:30:46Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-26T14:45:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shrishti2004/my-pet-dog | shrishti2004 | 2024-03-03T16:30:45Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-03T16:26:38Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by shrishti2004 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 0206CS221197
Sample pictures of this concept:
|
OmarHaroon01/byt5_small_finetune_ag_news | OmarHaroon01 | 2024-03-03T16:30:15Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-03-03T16:29:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ImperialIndians23/RobertaBaseProcessed | ImperialIndians23 | 2024-03-03T16:20:18Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-01T17:57:04Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: RobertaBaseProcessed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RobertaBaseProcessed
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2952
- Accuracy: 0.9131
- F1: 0.3893
- Precision: 0.58
- Recall: 0.2929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3458 | 1.0 | 1047 | 0.2952 | 0.9131 | 0.3893 | 0.58 | 0.2929 |
| 0.1506 | 2.0 | 2094 | 0.3071 | 0.9140 | 0.2969 | 0.6552 | 0.1919 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Sentdex/Walls1337bot-Llama2-7B-003.005.5000 | Sentdex | 2024-03-03T16:18:35Z | 12 | 1 | peft | [
"peft",
"safetensors",
"text-generation",
"dataset:Sentdex/WSB-003.005",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-03-02T20:27:20Z | ---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
license: apache-2.0
datasets:
- Sentdex/WSB-003.005
pipeline_tag: text-generation
---
Probably don't use this model, I'm just tinkering, but it's a multi-turn, multi-speaker model attempt trained from /r/wallstreetbets data that you can find: https://huggingface.co/datasets/Sentdex/WSB-003.005
```py
#https://huggingface.co/docs/peft/quicktour
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch
model = AutoPeftModelForCausalLM.from_pretrained("Sentdex/Walls1337bot-Llama2-7B-003.005.5000")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
model = model.to("cuda")
model.eval()
prompt = "Your text here."
formatted_prompt = f"### BEGIN CONVERSATION ###\n\n## Speaker_0: ##\n{prompt}\n\n## Walls1337bot: ##\n"
inputs = tokenizer(formatted_prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=128)
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
``` |
nawresboubakri23/finetuning-phi2-gsm8k-weights-and-biases | nawresboubakri23 | 2024-03-03T16:13:51Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
]
| null | 2024-02-22T11:54:47Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: finetuning-phi2-gsm8k-weights-and-biases
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-phi2-gsm8k-weights-and-biases
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
learn3r/longt5_xl_sfd_bp_40 | learn3r | 2024-03-03T16:13:48Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-06T12:14:43Z | ---
tags:
- generated_from_trainer
model-index:
- name: longt5_xl_sfd_bp_40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_sfd_bp_40
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 25.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1855 | 0.97 | 14 | 2.5320 |
| 0.1635 | 1.95 | 28 | 2.4299 |
| 0.1272 | 2.99 | 43 | 2.9443 |
| 0.1113 | 3.97 | 57 | 2.8813 |
| 0.0819 | 4.94 | 71 | 3.0005 |
| 0.0782 | 5.98 | 86 | 3.0224 |
| 0.0588 | 6.96 | 100 | 3.1903 |
| 0.0729 | 8.0 | 115 | 2.5871 |
| 0.0473 | 8.97 | 129 | 3.2830 |
| 0.113 | 9.95 | 143 | 3.3443 |
| 0.0364 | 10.99 | 158 | 3.3243 |
| 0.0321 | 11.97 | 172 | 3.3962 |
| 0.0302 | 12.94 | 186 | 3.4508 |
| 0.0717 | 13.98 | 201 | 3.4166 |
| 0.0746 | 14.96 | 215 | 2.8975 |
| 0.0548 | 16.0 | 230 | 3.0853 |
| 0.0507 | 16.97 | 244 | 3.0706 |
| 0.0442 | 17.95 | 258 | 3.2759 |
| 0.0396 | 18.99 | 273 | 3.1962 |
| 0.0351 | 19.97 | 287 | 3.3108 |
| 0.0306 | 20.94 | 301 | 3.2607 |
| 0.0267 | 21.98 | 316 | 3.4015 |
| 0.1454 | 22.96 | 330 | 2.6912 |
| 0.0252 | 24.0 | 345 | 3.4576 |
| 0.0187 | 24.35 | 350 | 3.6048 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
alielfilali01/LoRA-Land-TopicIdentificationAdapter | alielfilali01 | 2024-03-03T16:11:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T13:37:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alielfilali01/LoRA-Land-SummarizationAdapter | alielfilali01 | 2024-03-03T16:10:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T13:36:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alielfilali01/LoRA-Land-StructuredToTextAdapter | alielfilali01 | 2024-03-03T16:09:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T13:35:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alielfilali01/LoRA-Land-STEMAdapter | alielfilali01 | 2024-03-03T16:08:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T13:34:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huggingfaceaccountyx/es_ar_ru_de | huggingfaceaccountyx | 2024-03-03T16:07:23Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:data_folder/esarrude",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-03-03T15:26:13Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- data_folder/esarrude
model-index:
- name: es_ar_ru_de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# es_ar_ru_de
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the data_folder/esarrude dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.15.0
|
NDani/Taxi | NDani | 2024-03-03T16:07:06Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-03T16:07:04Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.40 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="NDani/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
alielfilali01/LoRA-Land-SentimentDetectionAdapter | alielfilali01 | 2024-03-03T16:06:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T13:32:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NDani/frozen | NDani | 2024-03-03T16:03:42Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-03T16:03:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: frozen
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="NDani/frozen", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
yechenzhi1/rl_course_vizdoom_health_gathering_supreme | yechenzhi1 | 2024-03-03T16:02:04Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-03T12:42:09Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 14.05 +/- 6.34
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r yechenzhi1/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Gordon119/TAT-openai-whisper-large-v2-mix-tag-epoch1-total5epoch | Gordon119 | 2024-03-03T15:58:36Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-02T02:09:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OwOpeepeepoopoo/gemmerica_6 | OwOpeepeepoopoo | 2024-03-03T15:56:29Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T14:58:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alielfilali01/LoRA-Land-AcademicBenchmarksAdapter | alielfilali01 | 2024-03-03T15:54:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T13:13:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bdxyx78/sohellighting | bdxyx78 | 2024-03-03T15:53:48Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2024-03-03T15:10:53Z | ---
license: bigscience-bloom-rail-1.0
---
|
AI4DS/SQLLlama-7b | AI4DS | 2024-03-03T15:42:53Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T15:39:51Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ImperialIndians23/RobertaBaseProcessedDownsampledKeywordDropout | ImperialIndians23 | 2024-03-03T15:42:05Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
]
| null | 2024-03-03T12:58:51Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: RobertaBaseProcessedDownsampledKeywordDropout
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RobertaBaseProcessedDownsampledKeywordDropout
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2807
- Accuracy: 0.8711
- F1: 0.4847
- Precision: 0.3908
- Recall: 0.6382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5428 | 1.0 | 297 | 0.4049 | 0.9050 | 0.0 | 0.0 | 0.0 |
| 0.3735 | 2.0 | 595 | 0.2807 | 0.8711 | 0.4847 | 0.3908 | 0.6382 |
| 0.3458 | 2.99 | 891 | 0.3110 | 0.8567 | 0.4828 | 0.3675 | 0.7035 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
linyunchien/blip2-test | linyunchien | 2024-03-03T15:41:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-03T14:46:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zjunlp/llama-molinst-biotext-7b | zjunlp | 2024-03-03T15:37:12Z | 0 | 7 | null | [
"chemistry",
"biology",
"biomolecular text",
"instructions",
"arxiv:2306.08018",
"license:apache-2.0",
"region:us"
]
| null | 2023-06-12T00:46:56Z | ---
license: apache-2.0
tags:
- chemistry
- biology
- biomolecular text
- instructions
---
This repo contains a fully fine-tuned [LLaMA-7b](https://huggingface.co/decapoda-research/llama-7b-hf), trained on the 🥼 **biomolecule text instructions** from the 🧪 [Mol-Instructions](https://huggingface.co/datasets/zjunlp/Mol-Instructions) dataset.
Instructions for running it can be found at https://github.com/zjunlp/Mol-Instructions.
> Please refer to our [paper](https://arxiv.org/abs/2306.08018) for more details.

<h3> 🥼 Tasks</h3>
<details>
<summary><b>Chemical entity recognition</b></summary>
- *Find and list all the instances of the chemical entities in the following content:*
"Both the control and caramiphen groups with double cannulas had significantly shorter latencies to seizure onset than the corresponding groups with single cannula."
```
caramiphen
```
</details>
<details>
<summary><b>Chemical-disease interaction extraction</b></summary>
- *You are provided with a set of clinical trial summaries. Extract the chemical-disease relations from the summaries and present your findings in the format of (Subject, Object):*
"Eating disorders and the associated behavioural problems and drug abuse are uncommon in pregnancy. When they do occur they are often unrecognized because of denial but when significant may pose a risk to both the mother and her fetus. This case illustrates a number of problems that may be encountered in women with eating disorders in pregnancy, including prolonged and recurrent metabolic disturbances and diuretic abuse. In particular it illustrates the derangements of thyroid function seen in pregnant women with eating disorders and reminds us that when a cause for thyrotoxicosis remains obscure, thyroxine abuse should be considered and explored."
```
(thyroxine, thyrotoxicosis)
```
</details>
<details>
<summary><b>Chemical-protein interaction extraction</b></summary>
- *Your task is to extract the subject and object entities from the provided text and establish the corresponding relation triples. Please format your answers as (Subject, Relation, Object):*
"Opioid receptors are the membrane proteins that mediate the pain-relieving effect of opioid drugs, such as morphine and fentanyl as well as endogenous opioid peptides enkephalins and endorphins. Using cDNAs for the mu and the kappa opioid receptors, we mapped the chromosomal locations of their genes in mouse. Multilocus cross analysis located the mu receptor gene Oprm on Chr 10 and the kappa receptor gene Oprk1 on Chr 1. Both genes are near centromere, with no markers more centromeric. These data indicate that the two opioid receptors are different gene products, ruling out the possibility that they may be differential splicing products from the same gene."
```
(morphine, regulator, Opioid receptors), (fentanyl, regulator, Opioid receptors)
```
</details>
<details>
<summary><b>Multiple-choice question</b></summary>
- *Which of the following conditions does not show multifactorial inheritance? What of the following is the right choice?*
(A) Pyloric stenosis (B) Schizophrenia (C) Spina bifida (neural tube defects) (D) Marfan syndrome
```
The final answer is (D).
```
</details>
<details>
<summary><b>True or False question</b></summary>
- *Does feeding tube insertion and its timing improve survival?*
```
No, neither insertion of PEG tubes nor timing of insertion affect survival.
```
</details>
<details>
<summary><b>Open question</b></summary>
- *What virus causes chronic disease?*
```
Hepatitis C virus (HCV).
```
</details>
<h3> 📝 Demo</h3>
As illustrated in [our repository](https://github.com/zjunlp/Mol-Instructions/tree/main/demo), we provide an example to perform generation.
```shell
>> python generate.py \
--CLI True \
--protein False\
--load_8bit \
--base_model $BASE_MODEL_PATH \
--lora_weights $FINETUNED_MODEL_PATH \
```
Please download [llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf/tree/main) to obtain the pre-training weights of LLaMA-7B, refine the `--base_model` to point towards the location where the model weights are saved.
For model fine-tuned on **biomolecular text** instructions, set `$FINETUNED_MODEL_PATH` to `'zjunlp/llama-molinst-biotext-7b'`.
<h3> 🚨 Limitations</h3>
The current state of the model, obtained via instruction tuning, is a preliminary demonstration. Its capacity to handle real-world, production-grade tasks remains limited.
<h3> 📚 References</h3>
If you use our repository, please cite the following related paper:
```
@inproceedings{fang2023mol,
author = {Yin Fang and
Xiaozhuan Liang and
Ningyu Zhang and
Kangwei Liu and
Rui Huang and
Zhuo Chen and
Xiaohui Fan and
Huajun Chen},
title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset
for Large Language Models},
booktitle = {{ICLR}},
publisher = {OpenReview.net},
year = {2024},
url = {https://openreview.net/pdf?id=Tlsdsb6l9n}
}
```
<h3> 🫱🏻🫲 Acknowledgements</h3>
We appreciate [LLaMA](https://github.com/facebookresearch/llama), [Huggingface Transformers Llama](https://github.com/huggingface/transformers/tree/main/src/transformers/models/llama), [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), [Alpaca-LoRA](https://github.com/tloen/alpaca-lora), [Chatbot Service](https://github.com/deep-diver/LLM-As-Chatbot) and many other related works for their open-source contributions.
|
anilerkul/results | anilerkul | 2024-03-03T15:36:30Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/SportsBERT",
"base_model:finetune:microsoft/SportsBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-03T11:31:53Z | ---
base_model: microsoft/SportsBERT
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/SportsBERT](https://huggingface.co/microsoft/SportsBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8218
- Accuracy: 0.7533
- F1: 0.5308
- Precision: 0.5341
- Recall: 0.5609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 6.3122 | 0.03 | 10 | 6.3037 | 0.0 | 0.0 | 0.0 | 0.0 |
| 6.275 | 0.05 | 20 | 6.2851 | 0.0 | 0.0 | 0.0 | 0.0 |
| 6.2656 | 0.08 | 30 | 6.2558 | 0.0 | 0.0 | 0.0 | 0.0 |
| 6.2088 | 0.1 | 40 | 6.1995 | 0.0015 | 0.0001 | 0.0001 | 0.0008 |
| 6.1868 | 0.13 | 50 | 6.1330 | 0.1196 | 0.0016 | 0.0009 | 0.0058 |
| 6.1126 | 0.16 | 60 | 6.0544 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 6.0563 | 0.18 | 70 | 5.9539 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.9162 | 0.21 | 80 | 5.8702 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.9922 | 0.23 | 90 | 5.8097 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.7775 | 0.26 | 100 | 5.7482 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.7196 | 0.29 | 110 | 5.6948 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.6704 | 0.31 | 120 | 5.6474 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.732 | 0.34 | 130 | 5.6125 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.5617 | 0.37 | 140 | 5.5776 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.5267 | 0.39 | 150 | 5.5558 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.7006 | 0.42 | 160 | 5.5266 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.4703 | 0.44 | 170 | 5.4877 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.5078 | 0.47 | 180 | 5.4431 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.4349 | 0.5 | 190 | 5.4124 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.6199 | 0.52 | 200 | 5.3824 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.002 | 0.55 | 210 | 5.3414 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.2497 | 0.57 | 220 | 5.3030 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 4.8403 | 0.6 | 230 | 5.2723 | 0.1315 | 0.0013 | 0.0007 | 0.0056 |
| 5.2636 | 0.63 | 240 | 5.2070 | 0.1448 | 0.0092 | 0.0131 | 0.0122 |
| 5.3921 | 0.65 | 250 | 5.2232 | 0.1713 | 0.0191 | 0.0192 | 0.0285 |
| 5.3127 | 0.68 | 260 | 5.1045 | 0.1462 | 0.0118 | 0.0120 | 0.0158 |
| 5.0857 | 0.7 | 270 | 5.0514 | 0.1787 | 0.0243 | 0.0304 | 0.0313 |
| 5.0425 | 0.73 | 280 | 4.9835 | 0.1802 | 0.0230 | 0.0238 | 0.0332 |
| 5.0088 | 0.76 | 290 | 4.9355 | 0.1905 | 0.0269 | 0.0283 | 0.0338 |
| 5.0987 | 0.78 | 300 | 4.8663 | 0.2112 | 0.0317 | 0.0360 | 0.0434 |
| 5.0445 | 0.81 | 310 | 4.8652 | 0.2378 | 0.0414 | 0.0369 | 0.0562 |
| 4.928 | 0.84 | 320 | 4.7639 | 0.2230 | 0.0369 | 0.0361 | 0.0487 |
| 4.7661 | 0.86 | 330 | 4.6684 | 0.2482 | 0.0491 | 0.0495 | 0.0606 |
| 4.7903 | 0.89 | 340 | 4.6012 | 0.2806 | 0.0713 | 0.0804 | 0.0929 |
| 4.5534 | 0.91 | 350 | 4.4948 | 0.3087 | 0.0853 | 0.0846 | 0.1086 |
| 4.6951 | 0.94 | 360 | 4.4281 | 0.3146 | 0.0922 | 0.0886 | 0.1136 |
| 4.379 | 0.97 | 370 | 4.4067 | 0.2984 | 0.0881 | 0.0950 | 0.1084 |
| 4.5588 | 0.99 | 380 | 4.3093 | 0.3176 | 0.0865 | 0.0944 | 0.1062 |
| 4.4383 | 1.02 | 390 | 4.2825 | 0.3486 | 0.1070 | 0.1039 | 0.1341 |
| 4.505 | 1.04 | 400 | 4.2196 | 0.3604 | 0.1125 | 0.1124 | 0.1386 |
| 3.7169 | 1.07 | 410 | 4.1832 | 0.3250 | 0.1008 | 0.1166 | 0.1180 |
| 3.7538 | 1.1 | 420 | 4.2051 | 0.4062 | 0.1526 | 0.1565 | 0.2016 |
| 3.5894 | 1.12 | 430 | 4.1349 | 0.3294 | 0.1042 | 0.1156 | 0.1206 |
| 4.045 | 1.15 | 440 | 4.0583 | 0.3855 | 0.1330 | 0.1388 | 0.1624 |
| 3.1886 | 1.17 | 450 | 3.9559 | 0.3944 | 0.1403 | 0.1485 | 0.1799 |
| 3.8633 | 1.2 | 460 | 3.9481 | 0.4092 | 0.1523 | 0.1468 | 0.2027 |
| 3.7127 | 1.23 | 470 | 3.8654 | 0.4151 | 0.1574 | 0.1591 | 0.1974 |
| 3.6555 | 1.25 | 480 | 3.8571 | 0.4210 | 0.1615 | 0.1603 | 0.2015 |
| 3.854 | 1.28 | 490 | 3.8096 | 0.4151 | 0.1542 | 0.1488 | 0.1897 |
| 3.7229 | 1.31 | 500 | 3.7690 | 0.4328 | 0.1716 | 0.1653 | 0.2164 |
| 3.9952 | 1.33 | 510 | 3.7769 | 0.4092 | 0.1638 | 0.1533 | 0.1937 |
| 3.4301 | 1.36 | 520 | 3.7061 | 0.4756 | 0.1968 | 0.1962 | 0.2515 |
| 4.1302 | 1.38 | 530 | 3.6265 | 0.4549 | 0.1919 | 0.1898 | 0.2297 |
| 3.5395 | 1.41 | 540 | 3.5806 | 0.4564 | 0.1917 | 0.1845 | 0.2368 |
| 3.2723 | 1.44 | 550 | 3.4968 | 0.4904 | 0.2195 | 0.2175 | 0.2682 |
| 3.1159 | 1.46 | 560 | 3.4794 | 0.4461 | 0.1967 | 0.2059 | 0.2205 |
| 3.3653 | 1.49 | 570 | 3.4419 | 0.4963 | 0.2216 | 0.2162 | 0.2864 |
| 3.1493 | 1.51 | 580 | 3.3549 | 0.4682 | 0.2123 | 0.2014 | 0.2564 |
| 2.8797 | 1.54 | 590 | 3.3212 | 0.5066 | 0.2594 | 0.2696 | 0.2992 |
| 3.3974 | 1.57 | 600 | 3.4043 | 0.4934 | 0.2324 | 0.2481 | 0.2751 |
| 2.9646 | 1.59 | 610 | 3.2956 | 0.4815 | 0.2238 | 0.2297 | 0.2535 |
| 2.5164 | 1.62 | 620 | 3.2025 | 0.4978 | 0.2191 | 0.2216 | 0.2596 |
| 3.1884 | 1.64 | 630 | 3.2710 | 0.5258 | 0.2607 | 0.2536 | 0.3152 |
| 3.5247 | 1.67 | 640 | 3.2109 | 0.5022 | 0.2348 | 0.2314 | 0.2739 |
| 3.2349 | 1.7 | 650 | 3.1718 | 0.5022 | 0.2391 | 0.2310 | 0.2804 |
| 3.0547 | 1.72 | 660 | 3.1540 | 0.5244 | 0.2693 | 0.2701 | 0.3103 |
| 2.6583 | 1.75 | 670 | 3.1089 | 0.5244 | 0.2491 | 0.2536 | 0.2864 |
| 2.7558 | 1.78 | 680 | 3.0492 | 0.5539 | 0.2897 | 0.2936 | 0.3351 |
| 2.289 | 1.8 | 690 | 3.1638 | 0.5022 | 0.2647 | 0.2837 | 0.2901 |
| 2.993 | 1.83 | 700 | 3.1440 | 0.5598 | 0.3071 | 0.3153 | 0.3659 |
| 3.1635 | 1.85 | 710 | 3.1380 | 0.4712 | 0.2289 | 0.2331 | 0.2569 |
| 3.1843 | 1.88 | 720 | 3.1009 | 0.4919 | 0.2453 | 0.2614 | 0.2718 |
| 2.6742 | 1.91 | 730 | 2.9769 | 0.5598 | 0.2931 | 0.2973 | 0.3383 |
| 2.9256 | 1.93 | 740 | 2.9560 | 0.5554 | 0.2834 | 0.2840 | 0.3348 |
| 2.4105 | 1.96 | 750 | 2.9320 | 0.5495 | 0.3003 | 0.3150 | 0.3446 |
| 2.9523 | 1.98 | 760 | 2.8481 | 0.5539 | 0.3025 | 0.3149 | 0.3452 |
| 2.6287 | 2.01 | 770 | 2.8092 | 0.5746 | 0.3231 | 0.3314 | 0.3631 |
| 2.6051 | 2.04 | 780 | 2.7894 | 0.5761 | 0.3248 | 0.3368 | 0.3639 |
| 1.9671 | 2.06 | 790 | 2.8455 | 0.5716 | 0.3203 | 0.3400 | 0.3522 |
| 2.2805 | 2.09 | 800 | 2.8596 | 0.5687 | 0.3086 | 0.3227 | 0.3452 |
| 2.5447 | 2.11 | 810 | 2.7331 | 0.6086 | 0.3369 | 0.3323 | 0.3932 |
| 2.7499 | 2.14 | 820 | 2.7018 | 0.5997 | 0.3358 | 0.3367 | 0.3867 |
| 2.1172 | 2.17 | 830 | 2.6629 | 0.5982 | 0.3320 | 0.3368 | 0.3762 |
| 2.143 | 2.19 | 840 | 2.6428 | 0.6278 | 0.3565 | 0.3612 | 0.4012 |
| 2.4473 | 2.22 | 850 | 2.5931 | 0.6233 | 0.3622 | 0.3769 | 0.4044 |
| 1.8206 | 2.25 | 860 | 2.5862 | 0.6145 | 0.3554 | 0.3525 | 0.4019 |
| 2.4884 | 2.27 | 870 | 2.5626 | 0.6263 | 0.3631 | 0.3640 | 0.4066 |
| 1.9498 | 2.3 | 880 | 2.5493 | 0.6189 | 0.3530 | 0.3655 | 0.3908 |
| 2.4184 | 2.32 | 890 | 2.5093 | 0.6381 | 0.3699 | 0.3801 | 0.4069 |
| 1.995 | 2.35 | 900 | 2.5456 | 0.6499 | 0.3831 | 0.3924 | 0.4172 |
| 2.4355 | 2.38 | 910 | 2.5306 | 0.6425 | 0.3787 | 0.3879 | 0.4126 |
| 1.8367 | 2.4 | 920 | 2.4947 | 0.6484 | 0.3799 | 0.3811 | 0.4207 |
| 1.6993 | 2.43 | 930 | 2.4694 | 0.6588 | 0.3971 | 0.3966 | 0.4462 |
| 1.6198 | 2.45 | 940 | 2.4768 | 0.6691 | 0.4087 | 0.4091 | 0.4498 |
| 2.0119 | 2.48 | 950 | 2.4319 | 0.6514 | 0.3892 | 0.3983 | 0.4202 |
| 1.3796 | 2.51 | 960 | 2.4279 | 0.6647 | 0.4042 | 0.4188 | 0.4332 |
| 1.5978 | 2.53 | 970 | 2.4716 | 0.6677 | 0.4062 | 0.4048 | 0.4537 |
| 2.26 | 2.56 | 980 | 2.4160 | 0.6736 | 0.4150 | 0.4216 | 0.4459 |
| 1.9445 | 2.58 | 990 | 2.4038 | 0.6750 | 0.4178 | 0.4259 | 0.4529 |
| 1.9551 | 2.61 | 1000 | 2.3866 | 0.6736 | 0.4056 | 0.4077 | 0.4442 |
| 2.052 | 2.64 | 1010 | 2.3938 | 0.6765 | 0.4157 | 0.4100 | 0.4654 |
| 2.0671 | 2.66 | 1020 | 2.4113 | 0.6736 | 0.4158 | 0.4119 | 0.4687 |
| 1.7332 | 2.69 | 1030 | 2.3930 | 0.6706 | 0.3984 | 0.3972 | 0.4432 |
| 1.9113 | 2.72 | 1040 | 2.3661 | 0.6780 | 0.4061 | 0.4035 | 0.4553 |
| 1.7881 | 2.74 | 1050 | 2.3104 | 0.6765 | 0.4149 | 0.4181 | 0.4588 |
| 1.6475 | 2.77 | 1060 | 2.2779 | 0.6824 | 0.4282 | 0.4316 | 0.4735 |
| 1.9959 | 2.79 | 1070 | 2.2720 | 0.6898 | 0.4332 | 0.4354 | 0.4746 |
| 1.5039 | 2.82 | 1080 | 2.2858 | 0.6839 | 0.4273 | 0.4330 | 0.4693 |
| 1.9764 | 2.85 | 1090 | 2.3054 | 0.6780 | 0.4123 | 0.4199 | 0.4555 |
| 1.7056 | 2.87 | 1100 | 2.2503 | 0.6809 | 0.4100 | 0.4185 | 0.4408 |
| 1.4112 | 2.9 | 1110 | 2.2162 | 0.7046 | 0.4379 | 0.4441 | 0.4758 |
| 2.1521 | 2.92 | 1120 | 2.2133 | 0.7046 | 0.4444 | 0.4507 | 0.4827 |
| 1.4928 | 2.95 | 1130 | 2.1953 | 0.7046 | 0.4498 | 0.4584 | 0.4882 |
| 1.5147 | 2.98 | 1140 | 2.1814 | 0.7061 | 0.4485 | 0.4544 | 0.4882 |
| 2.0173 | 3.0 | 1150 | 2.1921 | 0.6957 | 0.4344 | 0.4373 | 0.4792 |
| 1.4601 | 3.03 | 1160 | 2.1690 | 0.6957 | 0.4434 | 0.4473 | 0.4838 |
| 1.3261 | 3.05 | 1170 | 2.1156 | 0.7149 | 0.4656 | 0.4750 | 0.4958 |
| 1.6506 | 3.08 | 1180 | 2.0940 | 0.7149 | 0.4542 | 0.4632 | 0.4857 |
| 1.1869 | 3.11 | 1190 | 2.0919 | 0.7134 | 0.4597 | 0.4590 | 0.5002 |
| 1.4337 | 3.13 | 1200 | 2.1363 | 0.7090 | 0.4560 | 0.4518 | 0.5073 |
| 1.2734 | 3.16 | 1210 | 2.1231 | 0.7090 | 0.4585 | 0.4522 | 0.5095 |
| 1.6794 | 3.19 | 1220 | 2.0523 | 0.7238 | 0.4643 | 0.4660 | 0.4985 |
| 1.5335 | 3.21 | 1230 | 2.0347 | 0.7282 | 0.4611 | 0.4578 | 0.4994 |
| 0.9728 | 3.24 | 1240 | 2.0415 | 0.7253 | 0.4643 | 0.4549 | 0.5103 |
| 1.4616 | 3.26 | 1250 | 2.0451 | 0.7164 | 0.4612 | 0.4522 | 0.5098 |
| 1.2002 | 3.29 | 1260 | 2.0137 | 0.7253 | 0.4793 | 0.4808 | 0.5211 |
| 1.2331 | 3.32 | 1270 | 2.0234 | 0.7267 | 0.4794 | 0.4782 | 0.5223 |
| 1.3334 | 3.34 | 1280 | 2.0507 | 0.7134 | 0.4694 | 0.4721 | 0.5150 |
| 1.2774 | 3.37 | 1290 | 2.0493 | 0.7208 | 0.4813 | 0.4814 | 0.5217 |
| 1.0036 | 3.39 | 1300 | 2.0843 | 0.7179 | 0.4879 | 0.4945 | 0.5166 |
| 1.4289 | 3.42 | 1310 | 2.0544 | 0.7179 | 0.4843 | 0.4859 | 0.5262 |
| 1.0987 | 3.45 | 1320 | 2.0389 | 0.7164 | 0.4822 | 0.4803 | 0.5251 |
| 0.9749 | 3.47 | 1330 | 2.0509 | 0.7194 | 0.4927 | 0.4955 | 0.5334 |
| 1.23 | 3.5 | 1340 | 2.0327 | 0.7297 | 0.4963 | 0.5032 | 0.5260 |
| 1.2873 | 3.52 | 1350 | 2.0175 | 0.7312 | 0.4935 | 0.4969 | 0.5285 |
| 1.3335 | 3.55 | 1360 | 1.9970 | 0.7282 | 0.4955 | 0.4976 | 0.5331 |
| 1.1209 | 3.58 | 1370 | 1.9708 | 0.7326 | 0.4930 | 0.4929 | 0.5307 |
| 1.2895 | 3.6 | 1380 | 1.9480 | 0.7356 | 0.4967 | 0.4926 | 0.5358 |
| 1.3765 | 3.63 | 1390 | 1.9850 | 0.7297 | 0.5002 | 0.4913 | 0.5492 |
| 1.0298 | 3.66 | 1400 | 1.9649 | 0.7312 | 0.5022 | 0.5002 | 0.5443 |
| 1.4707 | 3.68 | 1410 | 1.9589 | 0.7326 | 0.4968 | 0.4995 | 0.5341 |
| 1.5404 | 3.71 | 1420 | 1.9712 | 0.7386 | 0.5056 | 0.5069 | 0.5395 |
| 1.2394 | 3.73 | 1430 | 1.9733 | 0.7386 | 0.5127 | 0.5142 | 0.5469 |
| 1.0191 | 3.76 | 1440 | 1.9696 | 0.7326 | 0.5005 | 0.5031 | 0.5342 |
| 0.8809 | 3.79 | 1450 | 1.9569 | 0.7386 | 0.5076 | 0.5022 | 0.5463 |
| 0.8113 | 3.81 | 1460 | 1.9445 | 0.7386 | 0.4960 | 0.4922 | 0.5313 |
| 0.8888 | 3.84 | 1470 | 1.9434 | 0.7415 | 0.4944 | 0.4934 | 0.5263 |
| 0.9775 | 3.86 | 1480 | 1.9311 | 0.7430 | 0.5001 | 0.4967 | 0.5343 |
| 1.5036 | 3.89 | 1490 | 1.9008 | 0.7445 | 0.5008 | 0.4964 | 0.5368 |
| 1.1425 | 3.92 | 1500 | 1.8969 | 0.7459 | 0.5113 | 0.5075 | 0.5461 |
| 1.0492 | 3.94 | 1510 | 1.8869 | 0.7459 | 0.5057 | 0.5027 | 0.5424 |
| 1.0938 | 3.97 | 1520 | 1.8864 | 0.7489 | 0.5111 | 0.5114 | 0.5432 |
| 1.3599 | 3.99 | 1530 | 1.8901 | 0.7459 | 0.5102 | 0.5071 | 0.5457 |
| 1.0393 | 4.02 | 1540 | 1.8817 | 0.7459 | 0.5145 | 0.5100 | 0.5515 |
| 0.8796 | 4.05 | 1550 | 1.8760 | 0.7430 | 0.5076 | 0.5043 | 0.5456 |
| 0.6769 | 4.07 | 1560 | 1.8813 | 0.7445 | 0.5094 | 0.5078 | 0.5466 |
| 1.1151 | 4.1 | 1570 | 1.8843 | 0.7430 | 0.5127 | 0.5083 | 0.5548 |
| 0.8389 | 4.13 | 1580 | 1.8787 | 0.7489 | 0.5175 | 0.5165 | 0.5556 |
| 0.8193 | 4.15 | 1590 | 1.8831 | 0.7415 | 0.5091 | 0.5064 | 0.5475 |
| 0.9354 | 4.18 | 1600 | 1.8832 | 0.7445 | 0.5086 | 0.5046 | 0.5456 |
| 0.7061 | 4.2 | 1610 | 1.8696 | 0.7445 | 0.5079 | 0.5039 | 0.5455 |
| 0.8033 | 4.23 | 1620 | 1.8655 | 0.7430 | 0.5037 | 0.4979 | 0.5458 |
| 1.0084 | 4.26 | 1630 | 1.8592 | 0.7459 | 0.5053 | 0.4996 | 0.5474 |
| 0.9944 | 4.28 | 1640 | 1.8578 | 0.7474 | 0.5107 | 0.5089 | 0.5464 |
| 0.9228 | 4.31 | 1650 | 1.8606 | 0.7459 | 0.5153 | 0.5178 | 0.5443 |
| 0.9574 | 4.33 | 1660 | 1.8575 | 0.7489 | 0.5194 | 0.5242 | 0.5483 |
| 0.7753 | 4.36 | 1670 | 1.8569 | 0.7489 | 0.5182 | 0.5223 | 0.5483 |
| 0.7223 | 4.39 | 1680 | 1.8525 | 0.7489 | 0.5188 | 0.5194 | 0.5515 |
| 0.8973 | 4.41 | 1690 | 1.8519 | 0.7518 | 0.5194 | 0.5184 | 0.5527 |
| 0.771 | 4.44 | 1700 | 1.8503 | 0.7533 | 0.5282 | 0.5339 | 0.5570 |
| 0.9367 | 4.46 | 1710 | 1.8546 | 0.7533 | 0.5260 | 0.5348 | 0.5527 |
| 1.1453 | 4.49 | 1720 | 1.8503 | 0.7533 | 0.5252 | 0.5335 | 0.5520 |
| 1.1738 | 4.52 | 1730 | 1.8443 | 0.7504 | 0.5263 | 0.5315 | 0.5552 |
| 1.0122 | 4.54 | 1740 | 1.8402 | 0.7548 | 0.5274 | 0.5296 | 0.5578 |
| 1.0207 | 4.57 | 1750 | 1.8371 | 0.7548 | 0.5251 | 0.5273 | 0.5582 |
| 0.8991 | 4.6 | 1760 | 1.8367 | 0.7504 | 0.5225 | 0.5213 | 0.5580 |
| 0.8017 | 4.62 | 1770 | 1.8375 | 0.7489 | 0.5213 | 0.5208 | 0.5579 |
| 0.9423 | 4.65 | 1780 | 1.8384 | 0.7518 | 0.5220 | 0.5217 | 0.5584 |
| 0.8043 | 4.67 | 1790 | 1.8366 | 0.7518 | 0.5241 | 0.5245 | 0.5588 |
| 0.7625 | 4.7 | 1800 | 1.8358 | 0.7504 | 0.5216 | 0.5203 | 0.5566 |
| 0.9742 | 4.73 | 1810 | 1.8344 | 0.7533 | 0.5278 | 0.5299 | 0.5606 |
| 0.8809 | 4.75 | 1820 | 1.8313 | 0.7504 | 0.5292 | 0.5307 | 0.5629 |
| 0.8433 | 4.78 | 1830 | 1.8299 | 0.7504 | 0.5292 | 0.5307 | 0.5629 |
| 0.7195 | 4.8 | 1840 | 1.8282 | 0.7533 | 0.5319 | 0.5342 | 0.5639 |
| 0.7989 | 4.83 | 1850 | 1.8270 | 0.7533 | 0.5316 | 0.5342 | 0.5635 |
| 0.7612 | 4.86 | 1860 | 1.8253 | 0.7563 | 0.5348 | 0.5372 | 0.5666 |
| 0.9571 | 4.88 | 1870 | 1.8240 | 0.7563 | 0.5351 | 0.5373 | 0.5666 |
| 0.7009 | 4.91 | 1880 | 1.8232 | 0.7563 | 0.5351 | 0.5373 | 0.5666 |
| 0.7424 | 4.93 | 1890 | 1.8224 | 0.7533 | 0.5293 | 0.5314 | 0.5609 |
| 1.0661 | 4.96 | 1900 | 1.8218 | 0.7533 | 0.5291 | 0.5315 | 0.5609 |
| 0.9666 | 4.99 | 1910 | 1.8217 | 0.7533 | 0.5291 | 0.5315 | 0.5609 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
|
zjunlp/llama-molinst-molecule-7b | zjunlp | 2024-03-03T15:35:58Z | 0 | 12 | null | [
"chemistry",
"biology",
"molecule",
"instructions",
"arxiv:2306.08018",
"license:apache-2.0",
"region:us"
]
| null | 2023-06-12T00:57:43Z | ---
license: apache-2.0
tags:
- chemistry
- biology
- molecule
- instructions
---
This repo contains a low-rank adapter for [LLaMA-7b](https://huggingface.co/decapoda-research/llama-7b-hf), trained on the 🔬 **molecule-oriented instructions** from the 🧪 [Mol-Instructions](https://huggingface.co/datasets/zjunlp/Mol-Instructions) dataset.
Instructions for running it can be found at https://github.com/zjunlp/Mol-Instructions.
> Please refer to our [paper](https://arxiv.org/abs/2306.08018) for more details.

<h3> 🔬 Tasks</h3>
<details>
<summary><b>Molecule description generation</b></summary>
- *Please give me some details about this molecule:*
[C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][=Branch1][C][=O][O][C@H1][Branch2][Ring1][=Branch1][C][O][C][=Branch1][C][=O][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][O][P][=Branch1][C][=O][Branch1][C][O][O][C][C@@H1][Branch1][=Branch1][C][=Branch1][C][=O][O][N]
```
The molecule is a 3-sn-phosphatidyl-L-serine in which the phosphatidyl acyl groups at positions 1 and 2 are specified as stearoyl and arachidonoyl respectively.
It is functionally related to an arachidonic acid and an octadecanoic acid.
```
</details>
<details>
<summary><b>Description-guided molecule design</b></summary>
- *Create a molecule with the structure as the one described:*
The molecule is a primary arylamine in which an amino functional group is substituted for one of the benzene hydrogens. It is a primary arylamine and a member of anilines.
```
[N][C][=C][C][=C][C][=C][Ring1][=Branch1]
```
</details>
<details>
<summary><b>Forward reaction prediction</b></summary>
- *With the provided reactants and reagents, propose a potential product:*
[O][=N+1][Branch1][C][O-1][C][=C][N][=C][Branch1][C][Cl][C][Branch1][C][I][=C][Ring1][Branch2].[Fe]
```
[N][C][=C][N][=C][Branch1][C][Cl][C][Branch1][C][I][=C][Ring1][Branch2]
```
</details>
<details>
<summary><b>Retrosynthesis</b></summary>
- *Please suggest potential reactants used in the synthesis of the provided product:*
[C][=C][C][C][N][C][=Branch1][C][=O][O][C][Branch1][C][C][Branch1][C][C][C]
```
[C][=C][C][C][N].[C][C][Branch1][C][C][Branch1][C][C][O][C][=Branch1][C][=O][O][C][=Branch1][C][=O][O][C][Branch1][C][C][Branch1][C][C][C]
```
</details>
<details>
<summary><b>Reagent prediction</b></summary>
- *Please provide possible reagents based on the following chemical reaction:*
[C][C][=C][C][=C][Branch1][C][N][C][=N][Ring1][#Branch1].[O][=C][Branch1][C][Cl][C][Cl]>>[C][C][=C][C][=C][Branch1][Branch2][N][C][=Branch1][C][=O][C][Cl][C][=N][Ring1][O]
```
[C][C][C][O][C][Ring1][Branch1].[C][C][N][Branch1][Ring1][C][C][C][C].[O]
```
</details>
<details>
<summary><b>Property prediction</b></summary>
- *Please provide the HOMO energy value for this molecule:*
[C][C][O][C][C][Branch1][C][C][C][Branch1][C][C][C]
```
-0.2482
```
</details>
<h3> 📝 Demo</h3>
As illustrated in [our repository](https://github.com/zjunlp/Mol-Instructions/tree/main/demo), we provide an example to perform generation.
```shell
>> python generate.py \
--CLI True \
--protein False\
--load_8bit \
--base_model $BASE_MODEL_PATH \
--lora_weights $FINETUNED_MODEL_PATH \
```
Please download [llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf/tree/main) to obtain the pre-training weights of LLaMA-7B, refine the `--base_model` to point towards the location where the model weights are saved.
For model fine-tuned on **molecule-oriented** instructions, set `$FINETUNED_MODEL_PATH` to `'zjunlp/llama-molinst-molecule-7b'`.
<h3> 🚨 Limitations</h3>
The current state of the model, obtained via instruction tuning, is a preliminary demonstration. Its capacity to handle real-world, production-grade tasks remains limited.
<h3> 📚 References</h3>
If you use our repository, please cite the following related paper:
```
@inproceedings{fang2023mol,
author = {Yin Fang and
Xiaozhuan Liang and
Ningyu Zhang and
Kangwei Liu and
Rui Huang and
Zhuo Chen and
Xiaohui Fan and
Huajun Chen},
title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset
for Large Language Models},
booktitle = {{ICLR}},
publisher = {OpenReview.net},
year = {2024},
url = {https://openreview.net/pdf?id=Tlsdsb6l9n}
}
```
<h3> 🫱🏻🫲🏾 Acknowledgements</h3>
We appreciate [LLaMA](https://github.com/facebookresearch/llama), [Huggingface Transformers Llama](https://github.com/huggingface/transformers/tree/main/src/transformers/models/llama), [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), [Alpaca-LoRA](https://github.com/tloen/alpaca-lora), [Chatbot Service](https://github.com/deep-diver/LLM-As-Chatbot) and many other related works for their open-source contributions. |
zjunlp/llama2-molinst-biotext-7b | zjunlp | 2024-03-03T15:33:59Z | 0 | 4 | null | [
"chemistry",
"biology",
"molecule",
"instructions",
"arxiv:2306.08018",
"license:apache-2.0",
"region:us"
]
| null | 2023-09-20T06:12:56Z | ---
license: apache-2.0
tags:
- chemistry
- biology
- molecule
- instructions
---
This repo contains a low-rank adapter for [LLaMA2-7b-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat), trained on the 🥼 **biomolecule text instructions** from the 🧪 [Mol-Instructions](https://huggingface.co/datasets/zjunlp/Mol-Instructions) dataset.
Instructions for running it can be found at https://github.com/zjunlp/Mol-Instructions.
> Please refer to our [paper](https://arxiv.org/abs/2306.08018) for more details.

<h3> 🥼 Tasks</h3>
<details>
<summary><b>Chemical entity recognition</b></summary>
- *Find and list all the instances of the chemical entities in the following content:*
"Both the control and caramiphen groups with double cannulas had significantly shorter latencies to seizure onset than the corresponding groups with single cannula."
```
caramiphen
```
</details>
<details>
<summary><b>Chemical-disease interaction extraction</b></summary>
- *You are provided with a set of clinical trial summaries. Extract the chemical-disease relations from the summaries and present your findings in the format of (Subject, Object):*
"Eating disorders and the associated behavioural problems and drug abuse are uncommon in pregnancy. When they do occur they are often unrecognized because of denial but when significant may pose a risk to both the mother and her fetus. This case illustrates a number of problems that may be encountered in women with eating disorders in pregnancy, including prolonged and recurrent metabolic disturbances and diuretic abuse. In particular it illustrates the derangements of thyroid function seen in pregnant women with eating disorders and reminds us that when a cause for thyrotoxicosis remains obscure, thyroxine abuse should be considered and explored."
```
(thyroxine, thyrotoxicosis)
```
</details>
<details>
<summary><b>Chemical-protein interaction extraction</b></summary>
- *Your task is to extract the subject and object entities from the provided text and establish the corresponding relation triples. Please format your answers as (Subject, Relation, Object):*
"Opioid receptors are the membrane proteins that mediate the pain-relieving effect of opioid drugs, such as morphine and fentanyl as well as endogenous opioid peptides enkephalins and endorphins. Using cDNAs for the mu and the kappa opioid receptors, we mapped the chromosomal locations of their genes in mouse. Multilocus cross analysis located the mu receptor gene Oprm on Chr 10 and the kappa receptor gene Oprk1 on Chr 1. Both genes are near centromere, with no markers more centromeric. These data indicate that the two opioid receptors are different gene products, ruling out the possibility that they may be differential splicing products from the same gene."
```
(morphine, regulator, Opioid receptors), (fentanyl, regulator, Opioid receptors)
```
</details>
<details>
<summary><b>Multiple-choice question</b></summary>
- *Which of the following conditions does not show multifactorial inheritance? What of the following is the right choice?*
(A) Pyloric stenosis (B) Schizophrenia (C) Spina bifida (neural tube defects) (D) Marfan syndrome
```
The final answer is (D).
```
</details>
<details>
<summary><b>True or False question</b></summary>
- *Does feeding tube insertion and its timing improve survival?*
```
No, neither insertion of PEG tubes nor timing of insertion affect survival.
```
</details>
<details>
<summary><b>Open question</b></summary>
- *What virus causes chronic disease?*
```
Hepatitis C virus (HCV).
```
</details>
<h3> 📝 Demo</h3>
As illustrated in [our repository](https://github.com/zjunlp/Mol-Instructions/tree/main/demo), we provide an example to perform generation.
```shell
>> python generate.py \
--CLI True \
--protein False\
--load_8bit \
--base_model $BASE_MODEL_PATH \
--lora_weights $FINETUNED_MODEL_PATH \
```
Please download [Llama-2-7b-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat) to obtain the pre-training weights of LlamA-2-7b-chat, refine the `--base_model` to point towards the location where the model weights are saved.
For model fine-tuned on **biomolecular text** instructions, set `$FINETUNED_MODEL_PATH` to `'zjunlp/llama2-molinst-molecule-7b'`.
<h3> 🚨 Limitations</h3>
The current state of the model, obtained via instruction tuning, is a preliminary demonstration. Its capacity to handle real-world, production-grade tasks remains limited.
<h3> 📚 References</h3>
If you use our repository, please cite the following related paper:
```
@inproceedings{fang2023mol,
author = {Yin Fang and
Xiaozhuan Liang and
Ningyu Zhang and
Kangwei Liu and
Rui Huang and
Zhuo Chen and
Xiaohui Fan and
Huajun Chen},
title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset
for Large Language Models},
booktitle = {{ICLR}},
publisher = {OpenReview.net},
year = {2024},
url = {https://openreview.net/pdf?id=Tlsdsb6l9n}
}
```
<h3> 🫱🏻🫲🏾 Acknowledgements</h3>
We appreciate [LLaMA-2](https://ai.meta.com/llama), [LLaMA](https://github.com/facebookresearch/llama), [Huggingface Transformers Llama](https://github.com/huggingface/transformers/tree/main/src/transformers/models/llama), [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), [Alpaca-LoRA](https://github.com/tloen/alpaca-lora), [Chatbot Service](https://github.com/deep-diver/LLM-As-Chatbot) and many other related works for their open-source contributions. |
lucyknada/andysalerno_openchat-nectar-0.5-exl2-6bpw | lucyknada | 2024-03-03T15:15:45Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-03T14:55:57Z | ### exl2 quant (measurement.json included)
---
### original readme below
---
---
license: apache-2.0
base_model: openchat/openchat-3.5-0106
datasets:
- berkeley-nest/Nectar
---
This is openchat/openchat-3.5-0106, tuned with DPO on a subset Nectar. This time with 5000 steps, a full epoch.
Careful attention was paid to make sure the chat template was followed properly.
Data selection and filtering:
- filtered dataset to only include examples with multiple turns, to preserve strength in multi-turn scenarios
- used the 4th ranking response as the "rejected" instead of the 3rd. When I inspected the dataset, I frequently could not find any meaningful difference in quality between the 1st and 3rd ranked responses, so to make the accepted/rejected signal extra clear, I replaced 3rd ranking with 4th ranking.
- I filtered out any examples with "good_natured == False". Why? When I inspected examples with "good_natured == False" in the Nectar dataset, I noticed they frequently include refusals from even the top ranking model. So, counter-intuitively, including "bad natured" entries might actually censor the model *more*, since the top responses (as ranked by GPT-4) to these queries tend to be refusals. Not to mention, the quality of the conversations that are "bad natured" tends to be worse in general, in my own opinion.
Differences from 0.4:
- Trained on 5000 steps instead of 500, with a lower learning rate and slower warmup period.
Summary of versions:
**[openchat-nectar-0.1](https://huggingface.co/andysalerno/openchat-nectar-0.1)**
- 200 steps, no filtering on Nectar dataset, 5e-5 learning rate
**[openchat-nectar-0.2](https://huggingface.co/andysalerno/openchat-nectar-0.2)**
- empty repo, failed training. ignore it
**[openchat-nectar-0.3](https://huggingface.co/andysalerno/openchat-nectar-0.3)**
- 500 steps, no filtering on Nectar dataset, 5e-5 learning rate (same as 1 but with more steps)
**[openchat-nectar-0.4](https://huggingface.co/andysalerno/openchat-nectar-0.4)**
- 500 steps, filtered dataset to only include multi-chat-turn examples, used 4th ranking response as the "rejected" instead of 3rd, filtered out "good_natured=False", 5e-5 learning rate
**[openchat-nectar-0.5](https://huggingface.co/andysalerno/openchat-nectar-0.5)**
- 5000 steps (over a full epoch), filtered dataset to only include multi-chat-turn examples, used 4th ranking response as the "rejected" instead of 3rd, filtered out "good_natured=False", 5e-6 learning rate. Same as 0.4 but with 10x the steps, and 1/10th the learning rate
**[openchat-nectar-0.6](https://huggingface.co/andysalerno/openchat-nectar-0.6)**
- 500 steps, filtered dataset to only include multi-chat-turn examples, used 4th ranking response as the "rejected" instead of 3rd, filtered out "good_natured=False", 5e-5 learning rate. Same as 0.5 but with 1/10th the steps, and 10x the learning rate
|
DrishtiSharma/gemma-7b-it-dolly-15k-english-brainstorming | DrishtiSharma | 2024-03-03T15:07:44Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-7b-it",
"base_model:adapter:google/gemma-7b-it",
"license:other",
"region:us"
]
| null | 2024-03-03T14:39:04Z | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-7b-it
model-index:
- name: gemma-7b-it-dolly-15k-english-brainstorming
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-7b-it-dolly-15k-english-brainstorming
This model is a fine-tuned version of [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3974
- Rouge Scores: {'rouge1': 0.8672818649395687, 'rouge2': 0.6454332350275582, 'rougeL': 0.6351345254871303, 'rougeLsum': 0.8672626398857906}
- Bleu Scores: [0.8956480474494563, 0.8706364987273697, 0.8304269359390679, 0.785372823061285]
- Gen Len: 170.6158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:--------:|
| 2.3012 | 1.0 | 398 | 2.0026 | {'rouge1': 0.8605967894222545, 'rouge2': 0.636131431928223, 'rougeL': 0.6291428969983008, 'rougeLsum': 0.860624331598749} | [0.8793248327807496, 0.8543278363738773, 0.8139923136639805, 0.7687265678282116] | 170.5932 |
| 1.2054 | 2.0 | 796 | 2.0260 | {'rouge1': 0.8587397434055452, 'rouge2': 0.6353005312218787, 'rougeL': 0.6345388735413529, 'rougeLsum': 0.8588818459220777} | [0.874882697664012, 0.8507131493229504, 0.8111205664503656, 0.7665706439816697] | 170.6271 |
| 0.519 | 3.0 | 1194 | 2.3974 | {'rouge1': 0.8672818649395687, 'rouge2': 0.6454332350275582, 'rougeL': 0.6351345254871303, 'rougeLsum': 0.8672626398857906} | [0.8956480474494563, 0.8706364987273697, 0.8304269359390679, 0.785372823061285] | 170.6158 |
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.1.dev0
- Tokenizers 0.15.2 |
Subsets and Splits