modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
RachidAR/AFlow-SegMoe-1Bx3-v0.1 | RachidAR | 2024-02-07T11:55:35Z | 6 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-1.5",
"moe",
"segmoe",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-07T11:10:40Z | ---
license: apache-2.0
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-1.5
- moe
- segmoe
language:
- en
library_name: diffusers
---
## Warning
This is an experimental model. It works only with the segmoe library!
## Experts
- source_model: Lykon/dreamshaper-8 (base)
- source_model: Lykon/AAM_AnyLora_AnimeMix
- source_model: stablediffusionapi/realistic-vision-51
## Usage
This model can be used via the [segmoe](https://github.com/segmind/segmoe) library.
Make sure to install segmoe by running
```bash
pip install segmoe
```
```python
from segmoe import SegMoEPipeline
pipeline = SegMoEPipeline("RachidAR/AFlow-SegMoe-1Bx3-v0.1", device = "cuda", safety_checker = None)
prompt = "cosmic canvas, orange city background, painting of a chubby cat"
negative_prompt = "nsfw, bad quality, worse quality"
img = pipeline(
prompt=prompt,
negative_prompt=negative_prompt,
height=1024,
width=1024,
num_inference_steps=25,
guidance_scale=7.5,
).images[0]
img.save("image.png")
```


 |
AIJUUD/juud-Mistral-7B-dpo | AIJUUD | 2024-02-07T11:47:45Z | 3,520 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T10:29:25Z | ---
library_name: transformers
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
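While the card leaves this section open, the snippet below is a minimal sketch assuming the standard 🤗 Transformers causal-LM chat API for this Mistral-based model; the repo id comes from the listing above, and the prompt, dtype, and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AIJUUD/juud-Mistral-7B-dpo"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

# assumes the tokenizer ships a chat template; otherwise pass a plain prompt string instead
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```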
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alexgastev/Reinforce-CartPole-v1 | alexgastev | 2024-02-07T11:46:53Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T11:46:43Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
iDSLR/DeepSilence-Harad-zero-peft-1.3B | iDSLR | 2024-02-07T11:40:49Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:42dot/42dot_LLM-PLM-1.3B",
"base_model:adapter:42dot/42dot_LLM-PLM-1.3B",
"region:us"
] | null | 2024-02-04T16:48:25Z | ---
library_name: peft
base_model: 42dot/42dot_LLM-PLM-1.3B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
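The section is still a placeholder, so here is a minimal sketch assuming the usual PEFT adapter workflow; the base model id is taken from the card's `base_model` field, while the dtype, device placement, and prompt are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "42dot/42dot_LLM-PLM-1.3B"                   # from the card's base_model field
adapter_id = "iDSLR/DeepSilence-Harad-zero-peft-1.3B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```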
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
PranavInvenics/phi2_v3 | PranavInvenics | 2024-02-07T11:36:16Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"autotrain",
"conversational",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T10:41:05Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
formatec/casenet-tuned-4 | formatec | 2024-02-07T11:32:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T11:30:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
iamhack/distilhubert-finetuned-ks-ob | iamhack | 2024-02-07T11:27:15Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-02-07T10:29:50Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-ks-ob
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9998775760048969
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-ks-ob
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0033
- Accuracy: 0.9999
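As a usage illustration that is not part of the original card, a minimal sketch with the 🤗 Transformers audio-classification pipeline might look like this; the audio path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="iamhack/distilhubert-finetuned-ks-ob")
# "sample.wav" is a placeholder for a local 16 kHz audio clip
predictions = classifier("sample.wav")
print(predictions)
```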
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1462 | 1.0 | 191 | 0.1376 | 0.9731 |
| 0.0317 | 2.0 | 383 | 0.0206 | 0.9969 |
| 0.0112 | 3.0 | 574 | 0.0078 | 0.9990 |
| 0.0062 | 4.0 | 766 | 0.0040 | 0.9998 |
| 0.0063 | 4.99 | 955 | 0.0033 | 0.9999 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
wahdan99/a2c-PandaReachDense-v3 | wahdan99 | 2024-02-07T11:22:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T11:18:42Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.07
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
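Until the author fills in the TODO above, a minimal sketch of the usual huggingface_sb3 workflow could look like the following; the checkpoint filename follows the common sb3 naming convention and is an assumption, as is the panda_gym environment setup.

```python
import gymnasium as gym
import panda_gym  # registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# filename is assumed from the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="wahdan99/a2c-PandaReachDense-v3",
                           filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```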
|
UnaiGurbindo/speecht5_finetuned_voxpopuli_es | UnaiGurbindo | 2024-02-07T11:20:40Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"lt",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-02-07T10:51:49Z | ---
language:
- lt
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 LT - Unai Gurbindo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 LT - Unai Gurbindo
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Vox Populi LT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4978
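As an illustration beyond what the card provides, the sketch below assumes the standard SpeechT5 text-to-speech API; the speaker x-vector dataset, the sample index, and the input sentence are illustrative assumptions.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo_id = "UnaiGurbindo/speecht5_finetuned_voxpopuli_es"
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# speaker embedding from the CMU Arctic x-vector set, a common choice in SpeechT5 examples
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Labas rytas", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```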
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6231 | 12.7 | 100 | 0.5834 |
| 0.5691 | 25.4 | 200 | 0.5259 |
| 0.5381 | 38.1 | 300 | 0.5030 |
| 0.5306 | 50.79 | 400 | 0.5016 |
| 0.521 | 63.49 | 500 | 0.4978 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Krithiik/t5-base-gloss-to-sentence | Krithiik | 2024-02-07T11:17:10Z | 4 | 0 | transformers | [
"transformers",
"tf",
"safetensors",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-07T11:15:41Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-gloss-to-sentence
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-base-gloss-to-sentence
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.37.0
- TensorFlow 2.15.0
- Datasets 2.1.0
- Tokenizers 0.15.1
|
athmurikarthik/videomae-base-action_detection | athmurikarthik | 2024-02-07T11:16:11Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-02-06T10:19:23Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-action_detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-action_detection
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2662
- Accuracy: 0.7243
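As a usage illustration that is not part of the original card, a minimal sketch with the standard VideoMAE classes might look like this; the random frames stand in for a real decoded 16-frame clip.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo_id = "athmurikarthik/videomae-base-action_detection"
processor = VideoMAEImageProcessor.from_pretrained(repo_id)
model = VideoMAEForVideoClassification.from_pretrained(repo_id)

# 16 RGB frames as a stand-in for a real video clip
video = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```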
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 15200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0956 | 0.02 | 305 | 1.3464 | 0.4774 |
| 0.683 | 1.02 | 610 | 2.3774 | 0.3704 |
| 0.5519 | 2.02 | 915 | 2.1501 | 0.3128 |
| 1.5863 | 3.02 | 1220 | 2.7112 | 0.2387 |
| 0.8028 | 4.02 | 1525 | 1.5204 | 0.7037 |
| 1.1797 | 5.02 | 1830 | 2.6479 | 0.2963 |
| 1.185 | 6.02 | 2135 | 0.8982 | 0.7860 |
| 0.9516 | 7.02 | 2440 | 1.2030 | 0.6008 |
| 0.5755 | 8.02 | 2745 | 0.8003 | 0.8189 |
| 0.6815 | 9.02 | 3050 | 2.3653 | 0.4198 |
| 1.1649 | 10.02 | 3355 | 3.0645 | 0.4403 |
| 1.1024 | 11.02 | 3660 | 2.4187 | 0.4321 |
| 1.1158 | 12.02 | 3965 | 2.2631 | 0.5597 |
| 0.2375 | 13.02 | 4270 | 2.2977 | 0.5432 |
| 0.7445 | 14.02 | 4575 | 1.0086 | 0.7860 |
| 0.6555 | 15.02 | 4880 | 0.7161 | 0.8560 |
| 0.8807 | 16.02 | 5185 | 1.2404 | 0.6584 |
| 1.0477 | 17.02 | 5490 | 1.6849 | 0.6173 |
| 0.498 | 18.02 | 5795 | 2.0557 | 0.5844 |
| 0.5536 | 19.02 | 6100 | 2.0703 | 0.5967 |
| 0.2232 | 20.02 | 6405 | 2.7690 | 0.4856 |
| 0.5589 | 21.02 | 6710 | 0.9549 | 0.7243 |
| 0.3377 | 22.02 | 7015 | 0.6488 | 0.8189 |
| 0.7096 | 23.02 | 7320 | 1.6638 | 0.5556 |
| 0.1201 | 24.02 | 7625 | 1.6283 | 0.5761 |
| 0.136 | 25.02 | 7930 | 1.4397 | 0.5926 |
| 0.2558 | 26.02 | 8235 | 1.7421 | 0.5350 |
| 0.3245 | 27.02 | 8540 | 1.2982 | 0.6132 |
| 0.0029 | 28.02 | 8845 | 1.0594 | 0.7202 |
| 0.3272 | 29.02 | 9150 | 1.0833 | 0.8272 |
| 0.0841 | 30.02 | 9455 | 1.3230 | 0.5926 |
| 0.5595 | 31.02 | 9760 | 2.5545 | 0.5844 |
| 0.0837 | 32.02 | 10065 | 1.5960 | 0.6296 |
| 0.0127 | 33.02 | 10370 | 1.8149 | 0.5720 |
| 0.3622 | 34.02 | 10675 | 2.4455 | 0.4938 |
| 0.0006 | 35.02 | 10980 | 1.6700 | 0.6461 |
| 0.0027 | 36.02 | 11285 | 2.2488 | 0.5720 |
| 0.0544 | 37.02 | 11590 | 2.6388 | 0.5514 |
| 0.2504 | 38.02 | 11895 | 1.5352 | 0.6379 |
| 0.0149 | 39.02 | 12200 | 2.2851 | 0.5391 |
| 0.4035 | 40.02 | 12505 | 1.8876 | 0.5556 |
| 0.0008 | 41.02 | 12810 | 2.4479 | 0.5473 |
| 0.3176 | 42.02 | 13115 | 2.0729 | 0.6049 |
| 0.0007 | 43.02 | 13420 | 1.5171 | 0.6255 |
| 0.3948 | 44.02 | 13725 | 1.4067 | 0.6132 |
| 0.0016 | 45.02 | 14030 | 1.0621 | 0.7325 |
| 0.2173 | 46.02 | 14335 | 1.5515 | 0.6132 |
| 0.0007 | 47.02 | 14640 | 1.2523 | 0.7284 |
| 0.2819 | 48.02 | 14945 | 1.5618 | 0.6461 |
| 0.0004 | 49.02 | 15200 | 1.2662 | 0.7243 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
OctavianB/MistralRo | OctavianB | 2024-02-07T10:55:24Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T10:55:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
surya47/medclip-roco | surya47 | 2024-02-07T10:54:57Z | 2 | 2 | transformers | [
"transformers",
"jax",
"hybrid-clip",
"medical",
"code",
"visual-question-answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2024-02-07T05:26:24Z | ---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: visual-question-answering
tags:
- medical
- code
--- |
dariolopez/Llama-2-databricks-dolly-oasst1-es-axolotl-GGUF | dariolopez | 2024-02-07T10:52:06Z | 0 | 0 | null | [
"es",
"license:apache-2.0",
"region:us"
] | null | 2023-09-05T07:40:24Z | ---
license: apache-2.0
language:
- es
---
Llama 2 (7B) fine-tuned on a [custom Spanish instruction dataset](https://huggingface.co/datasets/dariolopez/Llama-2-databricks-dolly-oasst1-es).
In this repo you can find 4-bit and 5-bit quantized GGUF versions of the [Llama 2 (7B) Spanish fine-tuned model](https://huggingface.co/dariolopez/Llama-2-databricks-dolly-oasst1-es-axolotl).
# How to use
```sh
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && git pull && make clean && make
git clone https://huggingface.co/dariolopez/Llama-2-databricks-dolly-oasst1-es-axolotl-GGUF
./main -m ./llama-2-databricks-dolly-oasst1-es-axolotl.gguf.q4_k_m.bin -n 2048 --color --temp 0 -ngl 35 -p "<s>[INST] Describe 5 lugares para visitar en España: [/INST]"
```
# Based on
https://mlabonne.github.io/blog/posts/Quantize_Llama_2_models_using_ggml.html |
llmware/slim-nli | llmware | 2024-02-07T10:45:05Z | 13 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-08T20:35:04Z | ---
license: apache-2.0
inference: false
---
# SLIM-NLI
<!-- Provide a quick summary of what the model is/does. -->
**slim-nli** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.
slim-nli has been fine-tuned for **natural language inference (nli)** function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
`{"evidence": ["contradicts"]}`
SLIM models are designed to generate structured outputs that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.
Each slim model has a 'quantized tool' version, e.g., [**'slim-nli-tool'**](https://huggingface.co/llmware/slim-nli-tool).
## Prompt format:
`function = "classify"`
`params = "nli"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
<details>
<summary>Transformers Script </summary>
model = AutoModelForCausalLM.from_pretrained("llmware/slim-nli")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-nli")
function = "classify"
params = "evidence"
# expects two statements - the first is evidence, and the second is a conclusion
text1 = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
text2 = "Investors are positive about the market."
# the two statements are concatenated with optional/helpful "Evidence: " and "Conclusion: " added
text = "Evidence: " + text1 + "\n" + "Conclusion: " + text2
prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])
outputs = model.generate(
inputs.input_ids.to('cpu'),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.3,
max_new_tokens=100
)
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
print("output only: ", output_only)
# here's the fun part
try:
output_only = ast.literal_eval(llm_string_output)
print("success - converted to python dictionary automatically")
except:
print("fail - could not convert to python dictionary automatically - ", llm_string_output)
</details>
<details>
<summary>Using as Function Call in LLMWare</summary>

```python
from llmware.models import ModelCatalog

slim_model = ModelCatalog().load_model("llmware/slim-nli")

# input text - expects two statements - the first is evidence, and the second is a conclusion
text1 = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
text2 = "Investors are positive about the market."
text = "Evidence: " + text1 + "\n" + "Conclusion: " + text2

response = slim_model.function_call(text, params=["evidence"], function="classify")
print("llmware - llm_response: ", response)
```
</details>
## Model Card Contact
Darren Oberst & llmware team
[Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
shidowake/cyber2chat-7B-base-bnb-4bit | shidowake | 2024-02-07T10:44:45Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-07T10:42:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
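As this section is still a placeholder, the following is a minimal sketch assuming the quantization settings are stored with the checkpoint (the repo's tags indicate bitsandbytes 4-bit weights); a CUDA GPU and the bitsandbytes package are required, and the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "shidowake/cyber2chat-7B-base-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# the 4-bit quantization config is expected to be read from the checkpoint itself
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```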
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
erfanvaredi/results | erfanvaredi | 2024-02-07T10:41:40Z | 6 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T10:24:32Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
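Since the card does not include a usage snippet, here is a minimal sketch assuming the adapter can be loaded with PEFT's AutoPeftModelForCausalLM on top of the Mistral-7B-Instruct base named above; the dtype, device placement, and prompt are illustrative assumptions.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "erfanvaredi/results"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

inputs = tokenizer("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```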
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
llmware/slim-ratings-tool | llmware | 2024-02-07T10:37:33Z | 71 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T17:03:40Z | ---
license: apache-2.0
---
# SLIM-RATINGS
<!-- Provide a quick summary of what the model is/does. -->
**slim-ratings-tool** is a 4_K_M quantized GGUF version of slim-ratings, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.
[**slim-ratings**](https://huggingface.co/llmware/slim-ratings) is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
To pull the model via API:
```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/slim-ratings-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```
Load in your favorite GGUF inference engine, or try with llmware as follows:
```python
from llmware.models import ModelCatalog

# to load the model and make a basic inference
model = ModelCatalog().load_model("slim-ratings-tool")
response = model.function_call(text_sample)

# this one line will download the model and run a series of tests
ModelCatalog().tool_test_run("slim-ratings-tool", verbose=True)
```
Slim models can also be loaded even more simply as part of multi-model, multi-step LLMfx calls:
```python
from llmware.agents import LLMfx

llm_fx = LLMfx()
llm_fx.load_tool("ratings")
response = llm_fx.ratings(text)
```
Note: please review [**config.json**](https://huggingface.co/llmware/slim-ratings-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
## Model Card Contact
Darren Oberst & llmware team
[Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
aanaya/rare-puppers | aanaya | 2024-02-07T10:37:32Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-07T09:46:25Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.21568627655506134
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
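As a usage illustration not included in the generated card, an image-classification pipeline call might look like the sketch below; the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="aanaya/rare-puppers")
# "leaf.jpg" is a placeholder; any local image path or URL works
print(classifier("leaf.jpg"))
```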
## Example Images
#### Abelmoschus esculentus leaves

#### Cannabis sativa leaves

#### Crotalaria juncea leaves

#### Jatropha multifida leaves

#### Tagetes minuta leaves
 |
llmware/slim-intent-tool | llmware | 2024-02-07T10:24:20Z | 70 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-04T21:55:25Z | ---
license: apache-2.0
---
# SLIM-INTENT-TOOL
<!-- Provide a quick summary of what the model is/does. -->
**slim-intent-tool** is a 4_K_M quantized GGUF version of slim-intent, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.
[**slim-intent**](https://huggingface.co/llmware/slim-intent) is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
To pull the model via API:
```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/slim-intent-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```
Load in your favorite GGUF inference engine, or try with llmware as follows:
```python
from llmware.models import ModelCatalog

# to load the model and make a basic inference
model = ModelCatalog().load_model("slim-intent-tool")
response = model.function_call(text_sample)

# this one line will download the model and run a series of tests
ModelCatalog().tool_test_run("slim-intent-tool", verbose=True)
```
Slim models can also be orchestrated as part of multi-model, multi-step LLMfx calls:
```python
from llmware.agents import LLMfx

llm_fx = LLMfx()
llm_fx.load_tool("intent")
response = llm_fx.intent(text)
```
Note: please review [**config.json**](https://huggingface.co/llmware/slim-intent-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
## Model Card Contact
Darren Oberst & llmware team
[Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
TENRO/Shizuku_Infinity_XX | TENRO | 2024-02-07T10:22:39Z | 0 | 0 | null | [
"region:us"
] | null | 2024-02-04T07:40:31Z | AItuberしずくちゃんのLoRAです。
anything-v4.0系のモデルで作成したものですので、左記モデルやそのマージモデルと相性が良いと思われます。サンプル画像では、VAEに関してもanything-v4.0用のものを使用しています。
LoRAの強度は0.9程度が良いようです。以下にサンプル画像用のプロンプトを提示します。
<lora:Shizuku_Infinity_XX:0.9>, 1girl, solo, milky white hair, ahoge, big bow on the head, headphones, Beautiful detailed gemological eyes, smile, open mouth, upper body,
EasyNegative, ng_deepnegative_v1_75t, verybadimagenegative_v1.3, (negative_hand:1.2), (negative_hand-neg:1.2), (black hair:1.4), (red hair:1.4), (@ @:1.4), (underwear:1.7), (nude:1.7),
(worst quality:1.2), (bad quality:1.2), (extra fingers:1.2), (deformed hands:1.2), (bad hands:1.2), (missing fingers:1.2), (over 6 fingers:1.2), (split fingers:1.2), (interlocked fingers:1.2), text, navel, teeth,


|
llmware/slim-intent | llmware | 2024-02-07T10:20:35Z | 11 | 9 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T21:54:57Z | ---
license: apache-2.0
inference: false
---
# SLIM-INTENT
<!-- Provide a quick summary of what the model is/does. -->
**slim-intent** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.
slim-intent has been fine-tuned for **intent analysis** function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
`{"intent": ["complaint"]}`
SLIM models are designed to generate structured output that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.
Each slim model has a 'quantized tool' version, e.g., [**'slim-intent-tool'**](https://huggingface.co/llmware/slim-intent-tool).
## Prompt format:
`function = "classify"`
`params = "intent"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
<details>
<summary>Transformers Script </summary>
model = AutoModelForCausalLM.from_pretrained("llmware/slim-intent")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-intent")
function = "classify"
params = "intent"
text = "I am really impressed with the quality of the product and the service that I have received so far."
prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])
outputs = model.generate(
inputs.input_ids.to('cpu'),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.3,
max_new_tokens=100
)
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
print("output only: ", output_only)
# here's the fun part
try:
output_only = ast.literal_eval(llm_string_output)
print("success - converted to python dictionary automatically")
except:
print("fail - could not convert to python dictionary automatically - ", llm_string_output)
</details>
<details>
<summary>Using as Function Call in LLMWare</summary>
```python
from llmware.models import ModelCatalog

slim_model = ModelCatalog().load_model("llmware/slim-intent")
text = "I am really impressed with the quality of the product and the service that I have received so far."
response = slim_model.function_call(text, params=["intent"], function="classify")
print("llmware - llm_response: ", response)
```
</details>
## Model Card Contact
Darren Oberst & llmware team
[Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
reginaldcoghlan/qa | reginaldcoghlan | 2024-02-07T10:19:12Z | 0 | 0 | null | [
"region:us"
] | null | 2024-02-07T10:16:17Z | In today's digital landscape, the reliability, functionality, and performance of software are paramount to business success. At https://inoxoft.com/service/qa-consulting/, we specialize in revolutionizing your approach to testing, ensuring your products meet exemplary quality standards every step of the way. Our QA consulting services are designed to enhance efficiency, elevate user experience, and propel your business toward greater heights.
As an ISO 27001 certified company and esteemed Microsoft Gold Partner, Google Cloud Partner, ISTQB Silver Partner, and recognized member of Clutch Firms that Deliver and Pangea, we bring unparalleled expertise to every project. Proud members of the Lviv IT Cluster, we are committed to setting industry standards and exceeding client expectations.
Our comprehensive suite of Quality Assurance consulting services includes:
- **Test Engineering:** Our seasoned software QA consultants craft and implement robust testing frameworks tailored to your project's unique requirements. From identifying and addressing defects to verifying system performance, we cover all functional and non-functional aspects with precision.
- **Test Management:** Ensure seamless planning, execution, and delivery of QA activities throughout your project lifecycle. Our specialists align testing processes with your company goals, objectives, and quality standards, monitoring progress, and addressing issues proactively.
- **Test Governance & Compliance:** Navigating industries with stringent regulations such as healthcare, finance, and government, we define policies, procedures, and guidelines to ensure compliance. Our quality control measures mitigate risks and ensure timely addressing of compliance-related challenges.
- **QA Audit and Improvement:** We analyze your existing QA processes to identify areas for improvement, streamlining workflows, and enhancing efficiency. Leveraging automation and continuous integration practices, we optimize your testing processes for maximum efficacy.
- **Pre-certification QA:** Prepare your software products for certification and compliance with industry standards and regulations. Our comprehensive assessments, gap analyses, and mock audits ensure your solution meets the necessary criteria. |
llmware/slim-category | llmware | 2024-02-07T10:13:01Z | 9 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-02T17:07:38Z | ---
license: apache-2.0
inference: false
---
# SLIM-CATEGORY
<!-- Provide a quick summary of what the model is/does. -->
**slim-category** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.
slim-category has been fine-tuned for **category topic analysis** function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
`{"category": ["markets"]}`
SLIM models are designed to generate structured outputs that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.
Each slim model has a 'quantized tool' version, e.g., [**'slim-category-tool'**](https://huggingface.co/llmware/slim-category-tool).
## Prompt format:
`function = "classify"`
`params = "category"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
<details>
<summary>Transformers Script </summary>
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("llmware/slim-category")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-category")
function = "classify"
params = "category"
text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])
outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
print("output only: ", output_only)
# here's the fun part - convert the raw string output into a python dictionary
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)
</details>
<details>
<summary>Using as Function Call in LLMWare</summary>
from llmware.models import ModelCatalog
slim_model = ModelCatalog().load_model("llmware/slim-category")
response = slim_model.function_call(text,params=["category"], function="classify")
print("llmware - llm_response: ", response)
</details>
## Model Card Contact
Darren Oberst & llmware team
[Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
ramsi-k/poca-SoccerTwos | ramsi-k | 2024-02-07T10:12:56Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-02-07T10:11:56Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ramsi-k/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Arozhada/dqn-SpaceInvadersNoFrameskip-v4 | Arozhada | 2024-02-07T10:08:15Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T10:07:40Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 660.00 +/- 215.20
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Arozhada -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Arozhada -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Arozhada
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
chenhaodev/solar-10b-ocn-v1 | chenhaodev | 2024-02-07T10:01:49Z | 3 | 1 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-v1.0",
"license:other",
"region:us"
] | null | 2024-02-07T09:12:23Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: upstage/SOLAR-10.7B-v1.0
model-index:
- name: solar-10b-ocn-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# solar-10b-ocn-v1
This model is a fine-tuned version of upstage/SOLAR-10.7B-v1.0 on the oncc_medqa_instruct dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training script
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py --stage sft --do_train True --model_name_or_path upstage/SOLAR-10.7B-v1.0 --template solar --finetuning_type lora --quantization_bit 4 --flash_attn True --dataset_dir data --dataset oncc_medqa_instruct --cutoff_len 1024 --learning_rate 0.0005 --num_train_epochs 1.0 --max_samples 5000 --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 100 --warmup_steps 10 --neftune_noise_alpha 0.5 --lora_rank 8 --lora_dropout 0.2 --lora_target wqkv --output_dir /workspace/solar-10b-ocn-v1 --fp16 True --plot_loss True
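For inference, the resulting LoRA adapter can be attached to the base model with PEFT. A minimal sketch (assumptions: this repository hosts the adapter weights produced by the run above, and the instruct-style prompt shown is only a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter (adapter repo id is an assumption)
base = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-v1.0", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "chenhaodev/solar-10b-ocn-v1")
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-v1.0")

prompt = "### User:\nWhich lab test is most specific for rheumatoid arthritis?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```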
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
Test script:
lm_eval --model hf --model_args pretrained=upstage/SOLAR-10.7B-v1.0,peft=chenhugging/solar-10b-ocn-v1,trust_remote_code=True,parallelize=True,load_in_4bit=True --tasks ocn,aocnp,medmcqa,pubmedqa,mmlu_clinical_knowledge,mmlu_college_medicine,mmlu_professional_medicine --device cuda:0 --limit 100
hf (pretrained=upstage/SOLAR-10.7B-v1.0,peft=chenhugging/solar-10b-ocn-v1,trust_remote_code=True,parallelize=True,load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.95|± |0.0219|
|medmcqa |Yaml |none | 0|acc | 0.42|± |0.0496|
|professional_medicine| 0|none | 0|acc | 0.72|± |0.0451|
|college_medicine | 0|none | 0|acc | 0.67|± |0.0473|
|clinical_knowledge | 0|none | 0|acc | 0.64|± |0.0482|
|ocn |Yaml |none | 0|acc | 0.83|± |0.0378|
|aocnp |Yaml |none | 0|acc | 0.72|± |0.0451|
|
ramsi-k/LunarLander-v2-fromscratch-tune | ramsi-k | 2024-02-07T09:56:52Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T09:51:41Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -194.56 +/- 121.41
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.001
'num_envs': 64
'num_steps': 32
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ramsi-k/LunarLander-v2-fromscratch-tune'
'batch_size': 2048
'minibatch_size': 512}
```
|
Pankaj001/Flower-Dataset-Resnet50-180 | Pankaj001 | 2024-02-07T09:54:02Z | 0 | 0 | tf-keras | [
"tf-keras",
"image-classification",
"license:apache-2.0",
"region:us"
] | image-classification | 2024-01-18T08:47:21Z | ---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-classification
---
# ResNet-50 Model for Flower Classification
This model is based on the ResNet-50 architecture and has been trained on a dataset of flower images.
## Model Details
- **Architecture**: ResNet-50
- **Input Size**: 180x180 pixels with 3 channels (RGB)
- **Data Preprocessing**: The model has been trained on normalized data.
- **Model Accuracy**: 80%
## Usage
You can use this model for flower image classification tasks. Below are some code snippets to help you get started:
flowers_url: "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
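A minimal inference sketch (assumptions: the checkpoint loads via `huggingface_hub.from_pretrained_keras`, inputs are scaled to [0, 1], and the class order follows the alphabetical flower_photos folders from the dataset above):
```python
import numpy as np
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("Pankaj001/Flower-Dataset-Resnet50-180")

# Load one image and resize it to the expected 180x180 RGB input
img = tf.keras.utils.load_img("my_flower.jpg", target_size=(180, 180))
x = tf.keras.utils.img_to_array(img) / 255.0  # scaling assumption: "normalized data" means [0, 1]
x = np.expand_dims(x, axis=0)

probs = model.predict(x)[0]
class_names = ["daisy", "dandelion", "roses", "sunflowers", "tulips"]  # assumed label order
print(class_names[int(np.argmax(probs))], float(np.max(probs)))
```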
---
license: apache-2.0
language:
- en
library_name: keras
--- |
mzbac/phi-2-2x3 | mzbac | 2024-02-07T09:53:30Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"phi2moe",
"text-generation",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-04T05:38:05Z | ---
license: mit
language:
- en
---
A MoE model built on top of microsoft/phi-2, g-ronimo/phi-2-OpenHermes-2.5, and mlx-community/phi-2-dpo-7k, with randomly initialized gate weights.
## Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
DEV = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name_or_path = "mzbac/phi-2-2x3"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
)
model.to(DEV)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Instruct: how backpropagation works.\nOutput:"
print("\n\n*** Generate:")
inputs = tokenizer.encode(prompt, return_tensors="pt").to(DEV)
generate_kwargs = dict(
input_ids=inputs,
temperature=0.3,
max_new_tokens=500,
do_sample=True,
)
outputs = model.generate(**generate_kwargs)
print(tokenizer.decode(outputs[0]))
``` |
romil9/rvctraintest | romil9 | 2024-02-07T09:51:46Z | 0 | 0 | null | [
"onnx",
"license:other",
"region:us"
] | null | 2024-02-07T06:35:36Z | ---
license: other
license_name: test
license_link: LICENSE
---
|
varun-v-rao/roberta-base-bn-adapter-895K-snli-model3 | varun-v-rao | 2024-02-07T09:46:34Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-02-07T08:57:02Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bn-adapter-895K-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bn-adapter-895K-snli-model3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7710
- Accuracy: 0.7275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4273 | 1.0 | 8584 | 0.3416 | 0.8694 |
| 0.4019 | 2.0 | 17168 | 0.3206 | 0.8800 |
| 0.385 | 3.0 | 25752 | 0.3148 | 0.8821 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
JiajingChen/c | JiajingChen | 2024-02-07T09:42:37Z | 1 | 0 | transformers | [
"transformers",
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2024-02-07T09:28:10Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: c
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**,
trained with a custom implementation.
## Usage
A minimal loading sketch (the filename of the saved Q-table is an assumption; `q-learning.pkl` is a common convention):
```python
import pickle
from huggingface_hub import hf_hub_download

# Download the pickled Q-table from the Hub (the filename is an assumption)
model_file = hf_hub_download(repo_id="JiajingChen/c", filename="q-learning.pkl")
with open(model_file, "rb") as f:
    model = pickle.load(f)  # typically a dict holding the Q-table and environment info
```
|
magus4450/speecht5_finetuned_voxpopuli_cs | magus4450 | 2024-02-07T09:42:35Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2024-02-07T06:06:45Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
model-index:
- name: speecht5_finetuned_voxpopuli_cs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_cs
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4251
## Model description
More information needed
## Intended uses & limitations
More information needed
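The checkpoint can be used for speech synthesis with the standard SpeechT5 pipeline. A minimal sketch (assumptions: the processor is bundled with this checkpoint, and a random vector stands in for a real speaker x-vector):
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("magus4450/speecht5_finetuned_voxpopuli_cs")
model = SpeechT5ForTextToSpeech.from_pretrained("magus4450/speecht5_finetuned_voxpopuli_cs")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Dobrý den, jak se máte?", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder; a real x-vector gives far better quality
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```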
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4831 | 7.14 | 1000 | 0.4424 |
| 0.468 | 14.27 | 2000 | 0.4310 |
| 0.4568 | 21.41 | 3000 | 0.4267 |
| 0.4604 | 28.55 | 4000 | 0.4251 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2
- Datasets 2.14.7
- Tokenizers 0.15.0 |
ramsi-k/LunarLander-v2-fromscratch | ramsi-k | 2024-02-07T09:38:06Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T09:38:01Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -117.87 +/- 48.29
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ramsi-k/LunarLander-v2-fromscratch'
'batch_size': 512
'minibatch_size': 128}
```
|
MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF | MaziyarPanahi | 2024-02-07T09:36:53Z | 97 | 5 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"lvkaokao/mistral-7b-finetuned-orca-dpo-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp",
"base_model:quantized:MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp",
"conversational"
] | text-generation | 2024-01-24T13:28:23Z | ---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- lvkaokao/mistral-7b-finetuned-orca-dpo-v2
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
model_name: mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF
base_model: MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp)
## Description
[MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF) contains GGUF format model files for [MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF) and below it, a specific filename to download, such as: mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
hoanghoavienvo/roberta-base-detect-cheapfake-ca1-ca2 | hoanghoavienvo | 2024-02-07T09:36:29Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-07T09:32:30Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-cheapfake-ca1-ca2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-cheapfake-ca1-ca2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1482
- Accuracy: 0.94
- F1: 0.9450
## Model description
More information needed
## Intended uses & limitations
More information needed
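A minimal classification sketch (assumptions: standard `transformers` pipeline usage; the expected input format and the meaning of the output labels are not documented here):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="hoanghoavienvo/roberta-base-detect-cheapfake-ca1-ca2")
# Placeholder input: presumably the two captions of an image, joined into a single string
print(clf("Caption A: Flood waters rise in the city. Caption B: Residents celebrate a festival."))
```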
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 38 | 0.6724 | 0.705 | 0.7807 |
| No log | 2.0 | 76 | 0.5437 | 0.925 | 0.9309 |
| No log | 3.0 | 114 | 0.1945 | 0.93 | 0.9340 |
| No log | 4.0 | 152 | 0.1559 | 0.94 | 0.9444 |
| No log | 5.0 | 190 | 0.1482 | 0.94 | 0.9450 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF | MaziyarPanahi | 2024-02-07T09:36:23Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Sao10K/NyakuraV2.1-m7",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp",
"base_model:quantized:MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp",
"conversational"
] | text-generation | 2024-01-24T14:03:24Z | ---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Sao10K/NyakuraV2.1-m7
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
model_name: NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF
base_model: MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp)
## Description
[MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF) contains GGUF format model files for [MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF) and below it, a specific filename to download, such as: NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
CLMBR/det-noun-lstm-1 | CLMBR | 2024-02-07T09:28:50Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-02-01T11:59:17Z | ---
tags:
- generated_from_trainer
model-index:
- name: det-noun-lstm-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# det-noun-lstm-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.8048 | 0.03 | 76320 | 4.7692 |
| 4.5159 | 1.03 | 152640 | 4.4852 |
| 4.3691 | 0.03 | 228960 | 4.3476 |
| 4.2797 | 1.03 | 305280 | 4.2637 |
| 4.2204 | 0.03 | 381600 | 4.2065 |
| 4.1733 | 1.03 | 457920 | 4.1648 |
| 4.1326 | 0.03 | 534240 | 4.1336 |
| 4.0967 | 1.03 | 610560 | 4.1082 |
| 4.0679 | 0.03 | 686880 | 4.0879 |
| 4.0421 | 1.03 | 763200 | 4.0721 |
| 4.0218 | 0.03 | 839520 | 4.0580 |
| 4.0062 | 1.03 | 915840 | 4.0475 |
| 3.9891 | 0.03 | 992160 | 4.0381 |
| 3.9682 | 0.03 | 1068480 | 4.0299 |
| 3.9583 | 1.03 | 1144800 | 4.0224 |
| 3.9536 | 0.03 | 1221120 | 4.0173 |
| 3.9398 | 1.03 | 1297440 | 4.0119 |
| 3.9296 | 0.03 | 1373760 | 4.0071 |
| 3.9182 | 1.03 | 1450080 | 4.0036 |
| 3.9138 | 0.03 | 1526400 | 4.0002 |
| 3.9124 | 1.03 | 1602720 | 3.9966 |
| 3.9072 | 0.03 | 1679040 | 3.9941 |
| 3.9015 | 1.03 | 1755360 | 3.9915 |
| 3.8912 | 0.03 | 1831680 | 3.9895 |
| 3.8851 | 1.03 | 1908000 | 3.9876 |
| 3.8767 | 0.03 | 1984320 | 3.9853 |
| 3.8708 | 0.03 | 2060640 | 3.9833 |
| 3.8676 | 1.03 | 2136960 | 3.9817 |
| 3.8631 | 0.03 | 2213280 | 3.9802 |
| 3.8513 | 1.03 | 2289600 | 3.9791 |
| 3.8494 | 0.03 | 2365920 | 3.9776 |
| 3.8548 | 1.03 | 2442240 | 3.9767 |
| 3.8471 | 0.03 | 2518560 | 3.9757 |
| 3.8443 | 0.03 | 2594880 | 3.9748 |
| 3.8389 | 1.03 | 2671200 | 3.9741 |
| 3.8405 | 0.03 | 2747520 | 3.9735 |
| 3.8435 | 1.03 | 2823840 | 3.9728 |
| 3.844 | 0.03 | 2900160 | 3.9724 |
| 3.8434 | 0.03 | 2976480 | 3.9719 |
| 3.8385 | 0.02 | 3052726 | 3.9717 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
EnDevSols/tinyllama-3T-64k-JSONExtractor | EnDevSols | 2024-02-07T09:27:43Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T09:26:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
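Until the authors add details, a generic causal-LM sketch (assumptions: standard `transformers` usage; the prompt format this JSON extractor expects is not documented, so the prompt below is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EnDevSols/tinyllama-3T-64k-JSONExtractor"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Extract the entities as JSON:\nJohn Doe, 42, lives in Berlin."  # placeholder prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```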
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JackCloudman/Senku-70B-Full-exl2-3.5bpw | JackCloudman | 2024-02-07T09:27:26Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T08:04:36Z | ---
license: cc-by-2.0
---
A finetune of miqu-70b-sf, a dequantization of miqudev's leaked Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this repository is a merge with the leaked model, so you can use the diff repository instead to save bandwidth.
EQ-Bench: 84.89
Will run more benches later. |
yeye776/OndeviceAI-base-v2 | yeye776 | 2024-02-07T09:18:42Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-base",
"base_model:finetune:paust/pko-t5-base",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-07T09:18:11Z | ---
license: cc-by-4.0
base_model: paust/pko-t5-base
tags:
- generated_from_trainer
model-index:
- name: OndeviceAI-base-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OndeviceAI-base-v2
This model is a fine-tuned version of [paust/pko-t5-base](https://huggingface.co/paust/pko-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
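As a quick reference, one way to run the fine-tuned checkpoint is sketched below. This is an illustrative sketch only: it assumes the model keeps the standard T5 seq2seq interface of its base model paust/pko-t5-base, and the Korean prompt is a made-up example.

```python
# Minimal inference sketch, assuming the standard T5 seq2seq interface of the
# base model (paust/pko-t5-base). The prompt and generation settings are illustrative.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "yeye776/OndeviceAI-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("거실 불 켜줘", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```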
|
Artefact2/Gembo-v1-70b-GGUF | Artefact2 | 2024-02-07T09:11:38Z | 20 | 6 | null | [
"gguf",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T04:55:10Z | ---
license: llama2
language:
- en
---
These are GGUF quantized versions of [ChuckMcSneed/Gembo-v1-70b](https://huggingface.co/ChuckMcSneed/Gembo-v1-70b).
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later. The IQ3_XXS requires version `f4d7e54` or later.
Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
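If shell tools are not convenient, the parts can also be reassembled with a few lines of Python; this is an illustrative sketch, and the `foo-Q6_K` names are the same placeholders used above.

```python
# Sketch: rebuild a split GGUF file from its parts (e.g. foo-Q6_K.gguf.aa, .ab, ...).
# Adjust the placeholder names to the actual files you downloaded.
import glob

parts = sorted(glob.glob("foo-Q6_K.gguf.*"))
with open("foo-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 24):  # copy in 16 MiB chunks
                out.write(chunk)
```
|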
phamtungthuy/law_model_merged | phamtungthuy | 2024-02-07T09:07:12Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T09:05:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phamtungthuy/quantized_law_model_merged | phamtungthuy | 2024-02-07T09:02:02Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-07T09:01:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Wembo/ppo-self-LunarLander-v2 | Wembo | 2024-02-07T09:01:30Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T08:45:25Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 20.77 +/- 54.16
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
'seed': 1,
'torch_deterministic': True,
'cuda': True,
'track': False,
'wandb_project_name': 'cleanRL',
'wandb_entity': None,
'capture_video': False,
'env_id': 'LunarLander-v2',
'total_timesteps': 500000,
'learning_rate': 0.00025,
'num_envs': 4,
'num_steps': 128,
'anneal_lr': True,
'gae': True,
'gamma': 0.99,
'gae_lambda': 0.95,
'num_minibatches': 4,
'update_epochs': 4,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'target_kl': None,
'repo_id': 'Wembo/ppo-self-LunarLander-v2',
'batch_size': 512,
'minibatch_size': 128}
```
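The `gae`, `gamma` and `gae_lambda` entries above control PPO's advantage estimation. The snippet below is an illustrative sketch of that computation (generalized advantage estimation); the helper name and array-based interface are illustrative, not taken from the training script.

```python
# Illustrative GAE sketch matching 'gamma': 0.99 and 'gae_lambda': 0.95 above.
import numpy as np

def compute_gae(rewards, values, dones, last_value, gamma=0.99, lam=0.95):
    """Return (advantages, returns) for one rollout of T steps."""
    advantages = np.zeros_like(rewards, dtype=np.float64)
    next_adv, next_value = 0.0, last_value
    for t in reversed(range(len(rewards))):
        nonterminal = 1.0 - dones[t]  # mask bootstrapping at episode ends
        delta = rewards[t] + gamma * next_value * nonterminal - values[t]
        next_adv = delta + gamma * lam * nonterminal * next_adv
        advantages[t] = next_adv
        next_value = values[t]
    return advantages, advantages + values
```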
|
varun-v-rao/roberta-base-bn-adapter-895K-snli-model2 | varun-v-rao | 2024-02-07T08:56:59Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-02-07T08:09:03Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bn-adapter-895K-snli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bn-adapter-895K-snli-model2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7648
- Accuracy: 0.7315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4332 | 1.0 | 8584 | 0.3469 | 0.8699 |
| 0.4008 | 2.0 | 17168 | 0.3200 | 0.8780 |
| 0.3889 | 3.0 | 25752 | 0.3143 | 0.8805 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mtgv/MobileVLM_V2-7B | mtgv | 2024-02-07T08:55:39Z | 106 | 5 | transformers | [
"transformers",
"pytorch",
"mobilevlm",
"text-generation",
"MobileVLM V2",
"arxiv:2402.03766",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-06T09:16:05Z | ---
license: apache-2.0
tags:
- MobileVLM V2
---
## Model Summary
MobileVLM V2 is a family of significantly improved vision language models built upon MobileVLM, demonstrating that a delicate orchestration of novel architectural design, an improved training scheme tailored for mobile VLMs, and rich high-quality dataset curation can substantially benefit VLMs’ performance. Specifically, MobileVLM V2 1.7B achieves better or on-par performance on standard VLM benchmarks compared with much larger VLMs at the 3B scale. Notably, the MobileVLM_V2-3B model outperforms a large variety of VLMs at the 7B+ scale.
MobileVLM_V2-7B was built on [Vicuna-7B-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) to facilitate off-the-shelf deployment.
## Model Sources
- Repository: https://github.com/Meituan-AutoML/MobileVLM
- Paper: [MobileVLM V2: Faster and Stronger Baseline for Vision Language Model](https://arxiv.org/abs/2402.03766)
## How to Get Started with the Model
Inference examples can be found at [Github](https://github.com/Meituan-AutoML/MobileVLM).
|
varun-v-rao/opt-1.3b-lora-3.15M-snli-model3 | varun-v-rao | 2024-02-07T08:47:47Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-classification",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:finetune:facebook/opt-1.3b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-07T02:16:53Z | ---
license: other
base_model: facebook/opt-1.3b
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opt-1.3b-lora-3.15M-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-1.3b-lora-3.15M-snli-model3
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6832
- Accuracy: 0.761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 49
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3553 | 1.0 | 4292 | 0.2816 | 0.8942 |
| 0.3227 | 2.0 | 8584 | 0.2643 | 0.9043 |
| 0.3151 | 3.0 | 12876 | 0.2574 | 0.9076 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
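A minimal usage sketch is shown below, assuming the checkpoint loads as a standard sequence-classification model (it is tagged `text-classification`); the premise/hypothesis example and the label mapping are illustrative and should be checked against the checkpoint's config.

```python
# Minimal sketch, assuming a standard sequence-classification head with
# SNLI-style premise/hypothesis input. The label order is checkpoint-specific.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "varun-v-rao/opt-1.3b-lora-3.15M-snli-model3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(pred, model.config.id2label.get(pred, pred))
```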
|
mikeee/phi-2-ft-evol-instruct-chinese-gpt4 | mikeee | 2024-02-07T08:33:08Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T08:33:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
muzammil-eds/tinyllama-3T-64k-JSONExtractor-v4 | muzammil-eds | 2024-02-07T08:22:45Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T08:21:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aanshula/layoutlm-funsd-tf | Aanshula | 2024-02-07T08:10:23Z | 46 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-06T05:06:14Z | ---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Aanshula/layoutlm-funsd-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Aanshula/layoutlm-funsd-tf
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3182
- Validation Loss: 0.6807
- Train Overall Precision: 0.7172
- Train Overall Recall: 0.7878
- Train Overall F1: 0.7508
- Train Overall Accuracy: 0.7864
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.7000 | 1.4167 | 0.2445 | 0.2107 | 0.2264 | 0.4831 | 0 |
| 1.1656 | 0.8677 | 0.5749 | 0.6257 | 0.5992 | 0.7251 | 1 |
| 0.7704 | 0.7254 | 0.6356 | 0.7160 | 0.6734 | 0.7637 | 2 |
| 0.5758 | 0.6690 | 0.6851 | 0.7476 | 0.7150 | 0.7857 | 3 |
| 0.4526 | 0.6096 | 0.7085 | 0.7757 | 0.7406 | 0.8046 | 4 |
| 0.3614 | 0.6834 | 0.7118 | 0.7657 | 0.7377 | 0.7872 | 5 |
| 0.3182 | 0.6807 | 0.7172 | 0.7878 | 0.7508 | 0.7864 | 6 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
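A minimal loading sketch is given below, assuming the checkpoint exposes the standard TensorFlow LayoutLM token-classification head of its base model; real FUNSD-style inference also needs per-token bounding boxes from an OCR step, so the zero `bbox` used here is only a placeholder.

```python
# Minimal sketch, assuming the standard TF LayoutLM token-classification head.
# Real inference needs per-token bounding boxes from OCR; zeros are placeholders.
import tensorflow as tf
from transformers import AutoTokenizer, TFLayoutLMForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMForTokenClassification.from_pretrained("Aanshula/layoutlm-funsd-tf")

enc = tokenizer("Invoice Number: 12345", return_tensors="tf")
bbox = tf.zeros((1, enc["input_ids"].shape[1], 4), dtype=tf.int32)
logits = model(input_ids=enc["input_ids"], bbox=bbox, attention_mask=enc["attention_mask"]).logits
print(logits.shape)  # (1, sequence_length, num_labels)
```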
|
varun-v-rao/roberta-base-bn-adapter-895K-snli-model1 | varun-v-rao | 2024-02-07T08:09:01Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-02-06T04:35:02Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bn-adapter-895K-snli-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bn-adapter-895K-snli-model1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7623
- Accuracy: 0.728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 61
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4254 | 1.0 | 8584 | 0.3365 | 0.8722 |
| 0.4021 | 2.0 | 17168 | 0.3165 | 0.8790 |
| 0.3806 | 3.0 | 25752 | 0.3115 | 0.8817 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
empty-michael/tinystories_1layer_attn_mlp_C10k_k100 | empty-michael | 2024-02-07T08:05:58Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"codebook",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"base_model:roneneldan/TinyStories-1Layer-21M",
"base_model:finetune:roneneldan/TinyStories-1Layer-21M",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T04:43:01Z | ---
base_model: roneneldan/TinyStories-1Layer-21M
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: tinystories_1layer_attn_mlp_C10k_k100
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.5429091526514649
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinystories_1layer_attn_mlp_C10k_k100
This model is a fine-tuned version of [roneneldan/TinyStories-1Layer-21M](https://huggingface.co/roneneldan/TinyStories-1Layer-21M) on the roneneldan/TinyStories dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8957
- Accuracy: 0.5429
- Multicode K: 1
- Dead Code Fraction/layer0: 0.0
- Mse/layer0: 611.1572
- Input Norm/layer0: 31.9975
- Output Norm/layer0: 15.0872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Multicode K | Dead Code Fraction/layer0 | Mse/layer0 | Input Norm/layer0 | Output Norm/layer0 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------:|:-------------------------:|:----------:|:-----------------:|:------------------:|
| 2.5072 | 0.05 | 500 | 2.4764 | 0.4579 | 1 | 0.0 | 841.1602 | 31.9977 | 4.9114 |
| 2.2285 | 0.1 | 1000 | 2.2265 | 0.4926 | 1 | 0.0 | 792.3023 | 31.9980 | 7.5524 |
| 2.1472 | 0.16 | 1500 | 2.1584 | 0.5025 | 1 | 0.0 | 761.8683 | 31.9980 | 8.9239 |
| 2.1144 | 0.21 | 2000 | 2.1128 | 0.5090 | 1 | 0.0 | 737.1843 | 31.9979 | 9.8992 |
| 2.0847 | 0.26 | 2500 | 2.0791 | 0.5142 | 1 | 0.0 | 716.9390 | 31.9979 | 10.6577 |
| 2.0439 | 0.31 | 3000 | 2.0482 | 0.5185 | 1 | 0.0 | 698.7266 | 31.9979 | 11.3599 |
| 2.0263 | 0.37 | 3500 | 2.0253 | 0.5224 | 1 | 0.0 | 682.2680 | 31.9979 | 12.0105 |
| 1.9906 | 0.42 | 4000 | 2.0066 | 0.5253 | 1 | 0.0 | 669.1965 | 31.9979 | 12.5568 |
| 1.9852 | 0.47 | 4500 | 1.9898 | 0.5279 | 1 | 0.0 | 657.5872 | 31.9979 | 13.0526 |
| 1.9687 | 0.52 | 5000 | 1.9757 | 0.5300 | 1 | 0.0 | 648.2462 | 31.9979 | 13.4496 |
| 1.9672 | 0.57 | 5500 | 1.9620 | 0.5321 | 1 | 0.0 | 640.0822 | 31.9978 | 13.8078 |
| 1.9441 | 0.63 | 6000 | 1.9513 | 0.5339 | 1 | 0.0 | 633.8831 | 31.9978 | 14.1018 |
| 1.9408 | 0.68 | 6500 | 1.9397 | 0.5358 | 1 | 0.0 | 628.0929 | 31.9977 | 14.3550 |
| 1.9256 | 0.73 | 7000 | 1.9302 | 0.5374 | 1 | 0.0 | 623.2726 | 31.9977 | 14.5534 |
| 1.9204 | 0.78 | 7500 | 1.9225 | 0.5381 | 1 | 0.0 | 619.4573 | 31.9977 | 14.7258 |
| 1.907 | 0.84 | 8000 | 1.9150 | 0.5393 | 1 | 0.0 | 616.4379 | 31.9976 | 14.8625 |
| 1.8931 | 0.89 | 8500 | 1.9076 | 0.5408 | 1 | 0.0 | 613.7874 | 31.9976 | 14.9685 |
| 1.9021 | 0.94 | 9000 | 1.9021 | 0.5417 | 1 | 0.0 | 612.0126 | 31.9975 | 15.0379 |
| 1.8967 | 0.99 | 9500 | 1.8970 | 0.5426 | 1 | 0.0 | 610.6121 | 31.9975 | 15.0932 |
| 1.8942 | 1.04 | 10000 | 1.8957 | 0.5429 | 1 | 0.0 | 611.1572 | 31.9975 | 15.0872 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
yeye776/OndeviceAI-base-v1 | yeye776 | 2024-02-07T07:40:41Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-base",
"base_model:finetune:paust/pko-t5-base",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-07T07:40:06Z | ---
license: cc-by-4.0
base_model: paust/pko-t5-base
tags:
- generated_from_trainer
model-index:
- name: OndeviceAI-base-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OndeviceAI-base-v1
This model is a fine-tuned version of [paust/pko-t5-base](https://huggingface.co/paust/pko-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
omartariq612/quran-lora-whisper-medium-epoch-1 | omartariq612 | 2024-02-07T07:40:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T07:39:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chenhaodev/mistral-7b-ocn-v2 | chenhaodev | 2024-02-07T07:22:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-07T07:07:17Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-ocn-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-ocn-v2
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the oncc_medqa_instruct dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-ocn-v2), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.40|± |0.0492|
|professional_medicine| 0|none | 0|acc | 0.69|± |0.0465|
|college_medicine | 0|none | 0|acc | 0.53|± |0.0502|
|clinical_knowledge | 0|none | 0|acc | 0.59|± |0.0494|
|ocn |Yaml |none | 0|acc | 0.80|± |0.0402|
|aocnp |Yaml |none | 0|acc | 0.63|± |0.0485|
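For loading, a minimal PEFT sketch is shown below; it assumes the repository contains a standard LoRA adapter for the base model listed above, and the prompt and generation settings are illustrative.

```python
# Minimal sketch, assuming a standard PEFT LoRA adapter on top of the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "chenhaodev/mistral-7b-ocn-v2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Which imaging study is typically ordered first for suspected pulmonary embolism?"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```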
|
TooMuchInfo/LeerdoelenGPT | TooMuchInfo | 2024-02-07T07:21:36Z | 0 | 0 | null | [
"education",
"nl",
"region:us"
] | null | 2024-02-07T07:21:03Z | ---
language:
- nl
tags:
- education
--- |
areegtarek/patientcommunication-8bit | areegtarek | 2024-02-07T07:17:24Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-07T07:13:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ergh0/q-FrozenLake-v1-4x4-noSlippery | ergh0 | 2024-02-07T07:15:43Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T07:11:23Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # assumption: classic Gym API, as used in the Deep RL course notebooks

# load_from_hub is the helper defined in the Deep RL course notebook (it downloads and unpickles the saved Q-table)
model = load_from_hub(repo_id="ergh0/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BanglaLLM/bangla-llama-13b-base-v0.1 | BanglaLLM | 2024-02-07T07:13:42Z | 163 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"bn",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T07:04:13Z | ---
language:
- bn
- en
license: llama2
---
# Bangla LLaMA 13B Base v0.1 [pre-trained]
Welcome to the inaugural release of the Bangla LLaMA 13B base model – an important step in advancing LLMs for the Bangla language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
> **Please Note:** This model, labeled as a foundational Bangla Language Model (LLM), is designed primarily for Causal Language Modeling (LM) purposes. In other words, if you are looking for an instruction following model in Bangla, you may find [BanglaLLM/bangla-llama-13b-instruct-v0.1](https://huggingface.co/BanglaLLM/bangla-llama-13b-instruct-v0.1) more suitable for your needs.
## Model description
The Bangla LLaMA models have been enhanced and tailored specifically with an extensive Bangla vocabulary of 16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 13B parameter model for Causal LM pre-trained on [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset's Bangla subset.
- **Language(s):** Bangla and English
- **License:** GNU General Public License v3.0
- **Source Model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
- **Training Precision:** `float16`
- **Code:** [GitHub](https://github.com/abhinand5/bangla-llama)
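As a quick reference, the base model can be loaded with the standard transformers causal-LM classes; the sketch below is illustrative (the prompt and generation settings are placeholders, not part of the official documentation).

```python
# Minimal sketch: plain causal-LM inference with the standard transformers classes.
# Prompt and generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BanglaLLM/bangla-llama-13b-base-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("বাংলাদেশের রাজধানী", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Remember (see the Usage Note below) that this is a raw base model for causal language modeling, not an instruction-following model.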
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Bangla LLaMA 7B Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/BanglaLLM/bangla-llama-7b-base-v0.1) |
| Bangla LLaMA 13B Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/BanglaLLM/bangla-llama-13b-base-v0.1) |
| Bangla LLaMA 7B Instruct | Instruction following model | 145k instructions | Bangla LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/BanglaLLM/bangla-llama-7b-instruct-v0.1) |
| Bangla LLaMA 13B Instruct | Instruction following model | 145k instructions | Bangla LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/BanglaLLM/bangla-llama-13b-instruct-v0.1) |
## Usage Note
It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abdullah Khan Zehady](https://www.linkedin.com/in/abdullah-khan-zehady-915ba024/)
## Citation
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Bangla language. |
varun-v-rao/bert-large-cased-bn-adapter-3.17M-snli-model3 | varun-v-rao | 2024-02-07T07:12:22Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T04:47:02Z | ---
license: apache-2.0
base_model: bert-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-cased-bn-adapter-3.17M-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-bn-adapter-3.17M-snli-model3
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7627
- Accuracy: 0.7315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 61
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4101 | 1.0 | 8584 | 0.3392 | 0.8718 |
| 0.3707 | 2.0 | 17168 | 0.3116 | 0.8842 |
| 0.3628 | 3.0 | 25752 | 0.3035 | 0.8879 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
rhplus0831/maid-yuzu-v5-mix-exl2-6.0bpw-rpcal | rhplus0831 | 2024-02-07T07:08:45Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:smelborp/MixtralOrochi8x7B",
"base_model:finetune:smelborp/MixtralOrochi8x7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T07:01:47Z | ---
base_model:
- smelborp/MixtralOrochi8x7B
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v5-mix
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was created because I was curious whether an 8x7B model assembled somewhat arbitrarily by the user could be successfully merged with other existing 8x7B models.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
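For background, spherical linear interpolation between two parameter vectors $p_0$ and $p_1$ with interpolation factor $t$ (here $t = 0.5$) follows the standard formula below; this is a generic statement of SLERP, not something taken from the merge logs.

$$
\mathrm{slerp}(p_0, p_1; t) = \frac{\sin\!\big((1-t)\,\Omega\big)}{\sin \Omega}\, p_0 + \frac{\sin(t\,\Omega)}{\sin \Omega}\, p_1,
\qquad
\cos \Omega = \frac{p_0 \cdot p_1}{\lVert p_0 \rVert \, \lVert p_1 \rVert}
$$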
### Models Merged
The following models were included in the merge:
* ../maid-yuzu-v5
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: ../maid-yuzu-v5
dtype: bfloat16
merge_method: slerp
parameters:
t:
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: smelborp/MixtralOrochi8x7B
- layer_range: [0, 32]
model:
model:
path: ../maid-yuzu-v5
```
|
huolongguo10/LLM_detect | huolongguo10 | 2024-02-07T07:06:11Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-05T13:19:11Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model aims to detect text that was generated by LLMs.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** huolongguo10
- **Model type:** bert
- **Language(s) (NLP):** Chinese
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** bert-base-chinese
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("huolongguo10/LLM_detect")
# The card and tags describe a text-classification (LLM-text detection) model,
# so it is loaded with a sequence-classification head rather than a masked-LM head.
model = AutoModelForSequenceClassification.from_pretrained("huolongguo10/LLM_detect")
```
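Continuing from the snippet above, a minimal classification call might look like the following; the label mapping is an assumption for illustration and should be checked against `model.config.id2label`.
```python
import torch

text = "这段文字是由大型语言模型生成的吗?"  # example Chinese input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to class probabilities; which index means "LLM-generated"
# depends on the training labels (see model.config.id2label).
probs = torch.softmax(logits, dim=-1)
print(probs)
```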
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** fp32 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** P100
- **Hours used:** 4h
- **Cloud Provider:** kaggle
## Technical Specifications [optional]
### Model Architecture and Objective
bert
### Compute Infrastructure
[More Information Needed]
#### Hardware
P100
#### Software
transformers
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
psyferpunk/mine | psyferpunk | 2024-02-07T07:05:01Z | 0 | 0 | bertopic | [
"bertopic",
"aa",
"dataset:HuggingFaceM4/WebSight",
"license:mit",
"region:us"
] | null | 2024-02-07T07:04:05Z | ---
license: mit
datasets:
- HuggingFaceM4/WebSight
language:
- aa
metrics:
- accuracy
library_name: bertopic
--- |
humung/koalpaca-polyglot-12.8B-ia3-vlending-v0.1 | humung | 2024-02-07T06:59:21Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:beomi/KoAlpaca-Polyglot-12.8B",
"base_model:adapter:beomi/KoAlpaca-Polyglot-12.8B",
"region:us"
] | null | 2024-02-07T06:59:19Z | ---
library_name: peft
base_model: beomi/KoAlpaca-Polyglot-12.8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
Pranav-10/Sentiment_analysis | Pranav-10 | 2024-02-07T06:53:03Z | 61 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-07T06:08:21Z | ---
license: apache-2.0
---
# Sentiment Analysis Model using DistilBERT
This repository hosts a sentiment analysis model fine-tuned on the IMDb movie reviews dataset using DistilBERT architecture. It's designed to classify text inputs into positive or negative sentiment categories.
## Model Description
The model is based on the DistilBERT architecture, a smaller, faster, cheaper, and lighter version of BERT. It has been fine-tuned on the IMDb dataset, which consists of 50,000 movie reviews labeled as positive or negative.
DistilBERT has been proven to retain most of the performance of BERT while being more efficient. This makes it an excellent choice for sentiment analysis tasks where the model's size and speed are essential.
## How to Use
To use the model, you will need to install the `transformers` library from Hugging Face. You can install it using pip:
```bash
pip install transformers
```
Once installed, you can use the following code to classify text using this model:
```python
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
import torch

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = DistilBertTokenizer.from_pretrained("Pranav-10/Sentimental_Analysis")
model = DistilBertForSequenceClassification.from_pretrained("Pranav-10/Sentimental_Analysis")

# Example text
text = "I loved this movie. The performances were fantastic!"

# Tokenize text and convert to tensor
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)

# Predict sentiment
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities using softmax
probabilities = torch.softmax(logits, dim=-1)

# Output the result
print(probabilities)
```
## Evaluation Results
The model achieved the following performance on the IMDb dataset:
- Accuracy: 90%
- Precision: 89%
- Recall: 91%
- F1 Score: 90%

These results indicate the model's high efficiency in classifying sentiments as positive or negative.
## Training Procedure
The model was trained using the following procedure:
- **Pre-processing:** The dataset was pre-processed by converting all reviews to lowercase and tokenizing using the DistilBERT tokenizer.
- **Optimization:** We used the Adam optimizer with a learning rate of 2e-5, a batch size of 16, and trained the model for 3 epochs.
- **Hardware:** Training was performed on a single NVIDIA GTX 1650 GPU.
|
EricValen/ppo-LunarLander-v2 | EricValen | 2024-02-07T06:18:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T06:18:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.77 +/- 22.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
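One possible way to load and evaluate the checkpoint with `huggingface_sb3` is sketched below; the checkpoint filename is an assumption based on the usual naming convention for these repositories, so check the Files tab for the actual name.
```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Assumed filename; verify it in the repository's Files & versions tab.
checkpoint = load_from_hub(
    repo_id="EricValen/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```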
|
danaleee/dog | danaleee | 2024-02-07T06:16:25Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-07T05:48:21Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - danaleee/dog
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on `a photo of sks dog` using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
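A possible inference sketch with 🤗 Diffusers is given below; treat the sampler settings and output handling as illustrative assumptions rather than the training script's exact configuration.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adapter weights from this repository.
pipe.load_lora_weights("danaleee/dog")

# The instance prompt "a photo of sks dog" was used during training.
image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```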
|
yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4 | yaneq | 2024-02-07T06:10:46Z | 1 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-07T06:10:43Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4
<Gallery />
## Model description
These are yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MDDL man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4/tree/main) them in the Files & versions tab.
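A minimal inference sketch with 🤗 Diffusers follows; the sampling settings are illustrative assumptions, while the base model, VAE, and trigger phrase come from this card.
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The card notes that madebyollin/sdxl-vae-fp16-fix was used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Load the LoRA adapter weights from this repository.
pipe.load_lora_weights("yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4")

# The trigger phrase matches the "Trigger words" section above.
image = pipe("a photo of MDDL man", num_inference_steps=30).images[0]
image.save("mddl_man.png")
```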
## Training properties
- max_train_steps: 700
- learning_rate: 0.0001
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls: - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FWF2NGBPUFgu9eyaCYAwB.jpg?alt=media&token=97c1e215-0a96-4fdf-b292-9ee0e497ba72
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fcn54hvM4ahi3MzpCQN5D.jpg?alt=media&token=e096f4dc-e7c5-4e14-88fc-a5562d103127
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fz8D9WdMIx4mXcsDGAZm4.jpg?alt=media&token=fded9422-eb7c-4757-8c1f-cb436a348579
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F6JW19SVZPczh5B2DEqKD.jpg?alt=media&token=0e0dc94f-957d-4b51-8979-0216c0849cf6
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FDAk5k1hGzP9q9y0jpGoO.jpg?alt=media&token=01ed67d1-938a-4f60-bc1a-e1b91412b97e
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F82McawlxnTeA2vBc4bZg.jpg?alt=media&token=f7cfacb2-2186-4005-9211-b7ef762dafad
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FY7nFiafx8co1nK6cnjWJ.jpg?alt=media&token=a1fe8c9a-4d5e-4043-9a82-9304fd430569
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FVYOVRhojKt30NzjWRXL0.jpg?alt=media&token=5a3a2afb-4b83-4488-92e5-6651f5173cc0
- gradient_accumulation_steps: 3
- GPU: T4
- duration: 5284.340887546539
|
saraswathi01/a2c-PandaPickAndPlace-v3 | saraswathi01 | 2024-02-07T06:10:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T06:06:06Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
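A possible loading sketch is shown below; the checkpoint filename is an assumption, and `panda-gym` must be installed so that the environment is registered.
```python
import gymnasium as gym
import panda_gym  # noqa: F401  (registers the PandaPickAndPlace-v3 environment)
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Assumed filename; verify it in the repository's Files & versions tab.
checkpoint = load_from_hub(
    repo_id="saraswathi01/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaPickAndPlace-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```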
|
VishalMishraTss/deit-base-patch16-224-finetuned-ind-14-imbalanced-pan-10847-train | VishalMishraTss | 2024-02-07T06:08:11Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-07T05:07:47Z | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: deit-base-patch16-224-finetuned-ind-14-imbalanced-pan-10847-train
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8703170028818443
- name: Recall
type: recall
value: 0.8703170028818443
- name: F1
type: f1
value: 0.8411548955923809
- name: Precision
type: precision
value: 0.8252839064351536
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224-finetuned-ind-14-imbalanced-pan-10847-train
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4660
- Accuracy: 0.8703
- Recall: 0.8703
- F1: 0.8412
- Precision: 0.8253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7292 | 0.99 | 43 | 0.6759 | 0.7925 | 0.7925 | 0.7582 | 0.7420 |
| 0.5224 | 2.0 | 87 | 0.5146 | 0.8501 | 0.8501 | 0.8228 | 0.8057 |
| 0.5103 | 2.97 | 129 | 0.4916 | 0.8674 | 0.8674 | 0.8391 | 0.8244 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ZiHDeng/peft-lora-starcoder1B-Instruction-ny8-ALL | ZiHDeng | 2024-02-07T06:07:53Z | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2024-02-07T03:55:10Z | ---
license: bigcode-openrail-m
library_name: peft
tags:
- generated_from_trainer
base_model: bigcode/starcoderbase-1b
model-index:
- name: peft-lora-starcoder1B-Instruction-ny8-ALL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-starcoder1B-Instruction-ny8-ALL
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1891 | 0.05 | 100 | 0.1452 |
| 0.1244 | 0.1 | 200 | 0.1096 |
| 0.1077 | 0.15 | 300 | 0.1006 |
| 0.0996 | 0.2 | 400 | 0.0958 |
| 0.0953 | 0.25 | 500 | 0.0927 |
| 0.0916 | 0.3 | 600 | 0.0882 |
| 0.0875 | 0.35 | 700 | 0.0867 |
| 0.0845 | 0.4 | 800 | 0.0873 |
| 0.0818 | 0.45 | 900 | 0.0863 |
| 0.0788 | 0.5 | 1000 | 0.0848 |
| 0.0781 | 0.55 | 1100 | 0.0844 |
| 0.0749 | 0.6 | 1200 | 0.0847 |
| 0.0726 | 0.65 | 1300 | 0.0849 |
| 0.0688 | 0.7 | 1400 | 0.0867 |
| 0.0701 | 0.75 | 1500 | 0.0861 |
| 0.0662 | 0.8 | 1600 | 0.0863 |
| 0.0658 | 0.85 | 1700 | 0.0867 |
| 0.0647 | 0.9 | 1800 | 0.0869 |
| 0.0644 | 0.95 | 1900 | 0.0870 |
| 0.0657 | 1.0 | 2000 | 0.0870 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
rombodawg/DeepMagic-Coder-7b | rombodawg | 2024-02-07T06:02:22Z | 8 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-06T19:58:50Z | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
# DeepMagic-Coder-7b
(Note: From short testing, the Alt version generated much better code)
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is described below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
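Stated as equations (a generic formulation of task arithmetic, not taken from this repository), with base parameters $\theta_{\text{base}}$ and fine-tuned parameters $\theta_i$:

$$
\tau_i = \theta_i - \theta_{\text{base}}, \qquad
\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \, \tau_i
$$

Here the weights $w_i$ correspond to the `weight` values in the YAML configuration further down.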
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using Mergekit, and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
ChayanM/Image_Captioner | ChayanM | 2024-02-07T05:57:48Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-02-04T17:43:12Z | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: Image_Captioner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Image_Captioner
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0923
- Rouge1: 25.0369
- Rouge2: 10.1572
- Rougel: 21.5244
- Rougelsum: 24.0775
- Gen Len: 18.9946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.253 | 1.0 | 836 | 0.1372 | 29.3958 | 12.2981 | 25.5129 | 27.9289 | 19.0 |
| 0.1361 | 2.0 | 1672 | 0.1151 | 25.8361 | 12.2894 | 23.7346 | 25.47 | 19.0 |
| 0.115 | 3.0 | 2508 | 0.1037 | 25.1859 | 11.9032 | 23.1038 | 24.8338 | 19.0 |
| 0.1027 | 4.0 | 3344 | 0.0942 | 26.0345 | 12.0324 | 23.4843 | 25.5426 | 19.0 |
| 0.0873 | 5.0 | 4180 | 0.0864 | 26.1657 | 11.685 | 23.6563 | 25.6247 | 19.0 |
| 0.0742 | 6.0 | 5016 | 0.0794 | 24.3621 | 10.5113 | 21.7192 | 23.8253 | 19.0 |
| 0.0646 | 7.0 | 5852 | 0.0740 | 24.711 | 11.194 | 22.2089 | 24.1793 | 19.0 |
| 0.0542 | 8.0 | 6688 | 0.0690 | 25.0339 | 10.8651 | 22.171 | 24.4106 | 19.0 |
| 0.046 | 9.0 | 7524 | 0.0650 | 25.0982 | 11.8399 | 22.701 | 24.623 | 18.9987 |
| 0.0386 | 10.0 | 8360 | 0.0623 | 26.2563 | 10.4715 | 22.5319 | 25.1412 | 18.9987 |
| 0.0317 | 11.0 | 9196 | 0.0591 | 26.4001 | 11.8031 | 23.1653 | 25.2856 | 18.9919 |
| 0.0273 | 12.0 | 10032 | 0.0587 | 25.6521 | 11.0174 | 22.7327 | 24.9068 | 18.9879 |
| 0.0231 | 13.0 | 10868 | 0.0583 | 26.7035 | 11.2021 | 23.0121 | 25.6384 | 18.9946 |
| 0.0195 | 14.0 | 11704 | 0.0592 | 25.5747 | 10.7424 | 22.3673 | 24.6944 | 19.0 |
| 0.0167 | 15.0 | 12540 | 0.0608 | 25.3022 | 10.163 | 21.9556 | 24.3587 | 18.9596 |
| 0.0142 | 16.0 | 13376 | 0.0614 | 25.0496 | 10.0656 | 21.7629 | 24.1094 | 18.9206 |
| 0.0119 | 17.0 | 14212 | 0.0618 | 26.0112 | 10.2519 | 22.1926 | 24.8873 | 18.8735 |
| 0.0102 | 18.0 | 15048 | 0.0653 | 25.6183 | 10.04 | 22.1136 | 24.5255 | 18.9125 |
| 0.0086 | 19.0 | 15884 | 0.0671 | 24.7352 | 9.6328 | 21.0675 | 23.7704 | 18.8694 |
| 0.0076 | 20.0 | 16720 | 0.0693 | 24.9512 | 9.6635 | 21.4761 | 23.9132 | 18.9112 |
| 0.0067 | 21.0 | 17556 | 0.0708 | 24.1732 | 9.158 | 20.3408 | 23.029 | 18.8358 |
| 0.0058 | 22.0 | 18392 | 0.0732 | 24.4503 | 9.4394 | 20.8584 | 23.4242 | 18.8035 |
| 0.0048 | 23.0 | 19228 | 0.0738 | 24.8844 | 9.9125 | 21.3509 | 23.9336 | 18.8089 |
| 0.0043 | 24.0 | 20064 | 0.0777 | 25.5401 | 10.1857 | 21.8328 | 24.4294 | 18.9058 |
| 0.0038 | 25.0 | 20900 | 0.0781 | 24.2235 | 9.0445 | 20.4463 | 23.0001 | 18.9166 |
| 0.0033 | 26.0 | 21736 | 0.0801 | 25.0127 | 9.8025 | 21.3116 | 23.9683 | 18.7308 |
| 0.0029 | 27.0 | 22572 | 0.0807 | 24.5765 | 9.6283 | 20.9556 | 23.4559 | 18.9166 |
| 0.0027 | 28.0 | 23408 | 0.0830 | 24.8389 | 9.8899 | 21.4027 | 23.9416 | 18.9233 |
| 0.0024 | 29.0 | 24244 | 0.0833 | 25.3695 | 10.162 | 21.7865 | 24.3737 | 18.7106 |
| 0.0022 | 30.0 | 25080 | 0.0832 | 24.8804 | 10.0825 | 21.4621 | 24.0326 | 18.9287 |
| 0.0021 | 31.0 | 25916 | 0.0853 | 25.0049 | 9.7036 | 21.3664 | 23.9173 | 18.9044 |
| 0.0019 | 32.0 | 26752 | 0.0855 | 25.0529 | 9.4994 | 21.2781 | 24.0076 | 18.9125 |
| 0.002 | 33.0 | 27588 | 0.0852 | 24.8417 | 9.9376 | 21.2526 | 23.8552 | 18.9031 |
| 0.0015 | 34.0 | 28424 | 0.0857 | 24.6359 | 9.5179 | 20.8941 | 23.4553 | 18.8937 |
| 0.0014 | 35.0 | 29260 | 0.0858 | 25.1156 | 10.1869 | 21.5805 | 23.9664 | 18.8156 |
| 0.0013 | 36.0 | 30096 | 0.0871 | 24.739 | 9.5548 | 21.15 | 23.749 | 18.9219 |
| 0.0011 | 37.0 | 30932 | 0.0884 | 24.774 | 9.7848 | 21.2467 | 23.833 | 18.9556 |
| 0.0011 | 38.0 | 31768 | 0.0889 | 25.2656 | 9.9796 | 21.517 | 24.1836 | 18.9462 |
| 0.0011 | 39.0 | 32604 | 0.0895 | 24.6627 | 9.3783 | 20.9288 | 23.5835 | 18.9704 |
| 0.001 | 40.0 | 33440 | 0.0906 | 25.1326 | 9.814 | 21.3593 | 24.0816 | 18.9260 |
| 0.0009 | 41.0 | 34276 | 0.0900 | 25.6889 | 10.3712 | 22.0588 | 24.695 | 18.9731 |
| 0.0008 | 42.0 | 35112 | 0.0911 | 24.6819 | 9.8307 | 21.1335 | 23.7053 | 18.9071 |
| 0.0008 | 43.0 | 35948 | 0.0905 | 24.4835 | 9.7292 | 21.017 | 23.5027 | 18.9623 |
| 0.0007 | 44.0 | 36784 | 0.0910 | 24.8203 | 9.5875 | 21.245 | 23.7718 | 18.9825 |
| 0.0007 | 45.0 | 37620 | 0.0914 | 25.1212 | 10.1024 | 21.6215 | 24.1061 | 18.9771 |
| 0.0006 | 46.0 | 38456 | 0.0914 | 25.1636 | 9.8127 | 21.5343 | 24.13 | 18.9475 |
| 0.0006 | 47.0 | 39292 | 0.0915 | 24.866 | 9.8427 | 21.3531 | 23.8643 | 18.9394 |
| 0.0006 | 48.0 | 40128 | 0.0916 | 25.064 | 10.049 | 21.5198 | 24.1158 | 18.9731 |
| 0.0005 | 49.0 | 40964 | 0.0923 | 24.8424 | 9.9718 | 21.3263 | 23.9031 | 18.9933 |
| 0.0005 | 50.0 | 41800 | 0.0923 | 25.0369 | 10.1572 | 21.5244 | 24.0775 | 18.9946 |
### Framework versions
- Transformers 4.37.1
- Pytorch 1.13.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.1
|
yeye776/OndeviceAI-large | yeye776 | 2024-02-07T05:57:09Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-large",
"base_model:finetune:paust/pko-t5-large",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-07T05:54:55Z | ---
license: cc-by-4.0
base_model: paust/pko-t5-large
tags:
- generated_from_trainer
model-index:
- name: OndeviceAI-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OndeviceAI-large
This model is a fine-tuned version of [paust/pko-t5-large](https://huggingface.co/paust/pko-t5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
shnl/llama2-13b-vinewsqa | shnl | 2024-02-07T05:27:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-13b",
"base_model:adapter:manhtt-079/llama-2-13b",
"region:us"
] | null | 2024-02-07T05:22:51Z | ---
library_name: peft
base_model: manhtt-079/llama-2-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
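A loading sketch that mirrors the quantization config above might look like the following; it assumes the adapter can be attached to the base model `manhtt-079/llama-2-13b` listed in this card's metadata.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes settings listed above (4-bit NF4, double quantization, bfloat16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "manhtt-079/llama-2-13b", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("manhtt-079/llama-2-13b")

# Attach the LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "shnl/llama2-13b-vinewsqa")
```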
### Framework versions
- PEFT 0.6.2
|
cvzion/mistral-dqg-v3 | cvzion | 2024-02-07T05:21:52Z | 0 | 0 | null | [
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T04:24:52Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
debajyotidasgupta/convnextv2-base-22k-384 | debajyotidasgupta | 2024-02-07T05:20:08Z | 179 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnextv2-base-22k-384",
"base_model:finetune:facebook/convnextv2-base-22k-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-04T15:27:03Z | ---
license: apache-2.0
base_model: facebook/convnextv2-base-22k-384
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- f1
model-index:
- name: convnextv2-base-22k-384
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: F1
type: f1
value: 0.9913113141099743
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-base-22k-384
This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0069
- F1: 0.9913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1521 | 1.0 | 202 | 0.0982 | 0.8278 |
| 0.0664 | 2.0 | 404 | 0.0626 | 0.9079 |
| 0.1053 | 3.0 | 606 | 0.0356 | 0.9537 |
| 0.0432 | 4.0 | 808 | 0.0302 | 0.9703 |
| 0.0552 | 5.0 | 1010 | 0.0114 | 0.9827 |
| 0.0352 | 6.0 | 1212 | 0.0131 | 0.9824 |
| 0.0221 | 7.0 | 1414 | 0.0063 | 0.9943 |
| 0.0018 | 8.0 | 1616 | 0.0169 | 0.9824 |
| 0.0283 | 9.0 | 1818 | 0.0028 | 0.9971 |
| 0.0429 | 10.0 | 2020 | 0.0069 | 0.9913 |
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu102
- Datasets 2.16.1
- Tokenizers 0.15.1
|
tvjoseph/ABSA1 | tvjoseph | 2024-02-07T05:12:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T05:11:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ealvaradob/bert-finetuned-phishing | ealvaradob | 2024-02-07T05:11:47Z | 3,247 | 13 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"phishing",
"BERT",
"en",
"dataset:ealvaradob/phishing-dataset",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T18:31:54Z | ---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
- phishing
- BERT
metrics:
- accuracy
- precision
- recall
model-index:
- name: bert-finetuned-phishing
results: []
widget:
- text: https://www.verif22.com
example_title: Phishing URL
- text: Dear colleague, An important update about your email has exceeded your
storage limit. You will not be able to send or receive all of your messages.
We will close all older versions of our Mailbox as of Friday, June 12, 2023.
To activate and complete the required information click here (https://ec-ec.squarespace.com).
Account must be reactivated today to regenerate new space. Management Team
example_title: Phishing Email
- text: You have access to FREE Video Streaming in your plan. REGISTER with your email, password and
then select the monthly subscription option. https://bit.ly/3vNrU5r
example_title: Phishing SMS
- text: if(data.selectedIndex > 0){$('#hidCflag').val(data.selectedData.value);};;
var sprypassword1 = new Spry.Widget.ValidationPassword("sprypassword1");
var sprytextfield1 = new Spry.Widget.ValidationTextField("sprytextfield1", "email");
example_title: Phishing Script
- text: Hi, this model is really accurate :)
example_title: Benign message
datasets:
- ealvaradob/phishing-dataset
language:
- en
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT FINETUNED ON PHISHING DETECTION
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on a [phishing dataset](https://huggingface.co/datasets/ealvaradob/phishing-dataset),
capable of detecting phishing in its four most common forms: URLs, Emails, SMS messages and even websites.
It achieves the following results on the evaluation set:
- Loss: 0.1953
- Accuracy: 0.9717
- Precision: 0.9658
- Recall: 0.9670
- False Positive Rate: 0.0249
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion.
This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why
it can use lots of publicly available data) with an automatic process to generate inputs and labels from
those texts.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters
## Motivation and Purpose
Phishing is one of the most frequent and most expensive cyber-attacks according to several security reports.
This model aims to efficiently and accurately prevent phishing attacks against individuals and organizations.
To achieve it, BERT was trained on a diverse and robust dataset containing: URLs, SMS Messages, Emails and
Websites, which allows the model to extend its detection capability beyond the usual and to be used in various
contexts.
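A quick way to try the model is through the Transformers `pipeline` API, as sketched below; the example inputs are taken from this card's widget examples.
```python
from transformers import pipeline

detector = pipeline("text-classification", model="ealvaradob/bert-finetuned-phishing")

samples = [
    "https://www.verif22.com",           # phishing URL example
    "Hi, this model is really accurate :)",  # benign message example
]
for text in samples:
    print(text, "->", detector(text))
```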
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | False Positive Rate |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:-------------------:|
| 0.1487 | 1.0 | 3866 | 0.1454 | 0.9596 | 0.9709 | 0.9320 | 0.0203 |
| 0.0805 | 2.0 | 7732 | 0.1389 | 0.9691 | 0.9663 | 0.9601 | 0.0243 |
| 0.0389 | 3.0 | 11598 | 0.1779 | 0.9683 | 0.9778 | 0.9461 | 0.0156 |
| 0.0091 | 4.0 | 15464 | 0.1953 | 0.9717 | 0.9658 | 0.9670 | 0.0249 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1 |
FinancialSupport/saiga-70b | FinancialSupport | 2024-02-07T05:11:15Z | 8 | 0 | null | [
"gguf",
"it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-06T22:56:40Z | ---
license: apache-2.0
language:
- it
---
The saiga is a strange antelope crossbreed that lives in the Siberian steppes.
The name comes from the fact that it is a relative of fauno/camoscio and a distant cousin of cerbero (other Italian open-source models).
It is a project carried out on weekends with little money and time available.
 |
ybzz/detr-pothole-augment | ybzz | 2024-02-07T04:56:57Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-02-07T04:56:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fionazhang/mistral-finetune-short | fionazhang | 2024-02-07T04:49:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-29T00:07:01Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-finetune-short
results: []
---
# mistral-finetune-short
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
It achieves the following results on the evaluation set:
- Loss: 2.0377
## Model description
This model is fine-tuned to specialize in generating content related to the environment and sustainability domain. The training involved Supervised Fine-Tuning (SFT), Parameter Efficient Fine-Tuning (PEFT), and Low-Rank Adaptation (LoRA) techniques to optimize model performance. The motivation behind this research is to explore the feasibility and effectiveness of Semantically Sufficient Private Large Language Models (LLMs) for secure, domain-specific knowledge extraction in the context of environment and sustainability.
## Intended uses
The model is intended for information retrieval and knowledge extraction tasks within the domain of environment and sustainability.
## Training and evaluation data
The training data consists of domain-specific text collected from Wikipedia pages related to environmental topics.
This model was trained using the Short dataset. [Model trained with the Long dataset](https://huggingface.co/fionazhang/mistral-finetune-long).
| **Dataset** | **URLs** | **Number of Rows** | **Number of Words** | **Number of Sentences** |
|-------------|----------|--------------------|----------------------|--------------------------|
| Short | 11 | 577 | 51,526 | 2,150 |
| Long | 23 | 1,431 | 124,682 | 5,209 |
**Table 1:** Summary of Dataset Information
### Environment and Sustainability
This model is tailored for the environment and sustainability domain, with a focus on assisting researchers and enterprises, particularly in alignment with the work of the Commonwealth Scientific and Industrial Research Organisation (CSIRO).
### Data Collection Process
The training data was collected through a Python program that extracted and cleaned text content from specific Wikipedia pages related to environmental topics. The program utilized various libraries, such as `requests`, `BeautifulSoup`, and `nltk`, for efficient web scraping, HTML parsing, and natural language processing.
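A hedged sketch of what one iteration of that collection step could look like (the actual program is not reproduced here; the page URL and cleaning rules are illustrative):
```python
import nltk
import requests
from bs4 import BeautifulSoup

nltk.download("punkt", quiet=True)

def collect_wikipedia_text(url: str) -> list[str]:
    """Fetch a Wikipedia page, keep paragraph text, and split it into sentences."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    text = " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
    return nltk.sent_tokenize(text)

sentences = collect_wikipedia_text("https://en.wikipedia.org/wiki/Sustainability")
```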
## Training procedure
### Fine-tuning
The fine-tuning process involved Soft Fine-Tuning, PEFT, and LoRA techniques. Soft Fine-Tuning utilized continuous-valued probabilities as labels, suitable for generation models. PEFT focused on updating a small subset of parameters during fine-tuning to prevent catastrophic forgetting. LoRA, a lightweight training technique, reduced the number of trainable parameters for faster and memory-efficient training.
#### Low-Rank Adaptation (LoRA) Parameters
- lora_alpha: 16
- lora_dropout: 0.1
- r: 8
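These values map onto a `peft` `LoraConfig` roughly as follows (a sketch only; `task_type` and `target_modules` are assumptions, since the card does not list them):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",                # assumed
    target_modules=["q_proj", "v_proj"],  # assumed; not stated in the card
)
```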
#### Training Parameters
- num_train_epochs: 2
- per_device_train_batch_size: 3
- per_device_eval_batch_size: 3
- gradient_accumulation_steps: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- learning_rate: 5e-05
- weight_decay: 0.001
- max_grad_norm: 0.3
- max_steps: -1
- warmup_ratio: 0.03
- group_by_length: True
- lr_scheduler_type: constant
- seed: 42
### Training results
#### Training Loss

*Figure 1: Training loss curve of model fionazhang/mistral-finetune-short (logging step = 10)*
In the training process, the observed training losses exhibit jittery yet overall decreasing trends. The final evaluation loss reaches a satisfactory value of 2.0377, indicating successful learning and adaptation to the nuances of the provided data.
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0a0+git7bcf7da
- Datasets 2.16.1
- Tokenizers 0.15.0 |
varun-v-rao/t5-large-bn-adapter-6.34M-snli-model1 | varun-v-rao | 2024-02-07T04:47:48Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"region:us"
] | null | 2024-02-06T21:11:35Z | ---
license: apache-2.0
base_model: t5-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-large-bn-adapter-6.34M-snli-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-bn-adapter-6.34M-snli-model1
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6034
- Accuracy: 0.8005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3118 | 1.0 | 17168 | 0.2381 | 0.9150 |
| 0.2742 | 2.0 | 34336 | 0.2299 | 0.9171 |
| 0.2725 | 3.0 | 51504 | 0.2277 | 0.9197 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
varun-v-rao/bert-large-cased-bn-adapter-3.17M-snli-model2 | varun-v-rao | 2024-02-07T04:46:51Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T02:22:08Z | ---
license: apache-2.0
base_model: bert-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-cased-bn-adapter-3.17M-snli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-bn-adapter-3.17M-snli-model2
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7747
- Accuracy: 0.731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4017 | 1.0 | 8584 | 0.3327 | 0.8763 |
| 0.3769 | 2.0 | 17168 | 0.3069 | 0.8881 |
| 0.3641 | 3.0 | 25752 | 0.3005 | 0.8895 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
AsphyXIA/baarat-hin-en-0.1 | AsphyXIA | 2024-02-07T04:46:11Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T04:46:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
varun-v-rao/t5-base-bn-adapter-1.79M-snli-model3 | varun-v-rao | 2024-02-07T04:42:15Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T02:16:46Z | ---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-bn-adapter-1.79M-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-bn-adapter-1.79M-snli-model3
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7044
- Accuracy: 0.7455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 79
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4101 | 1.0 | 8584 | 0.3336 | 0.8763 |
| 0.3814 | 2.0 | 17168 | 0.3112 | 0.8858 |
| 0.3695 | 3.0 | 25752 | 0.3061 | 0.8883 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ealvaradob/bert-phishing-text | ealvaradob | 2024-02-07T04:37:15Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"dataset:ealvaradob/phishing-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-28T19:06:47Z | ---
license: apache-2.0
datasets:
- ealvaradob/phishing-dataset
---
<strong><span style="color:red">WARNING ...</span></strong>
This is **NOT** the final BERT model trained for phishing detection. It only corresponds to an evaluation of BERT performance against email and SMS samples.
This model has the following performance in email and SMS phishing detection:
- Accuracy: 0.990318
- Precision: 0.990170
- Recall: 0.984365
- AUC: 0.999146
👇¡CHECK BERT FINAL MODEL FINETUNED FOR PHISHING DETECTION ON THE FOLLOWING LINK!👇
_https://huggingface.co/ealvaradob/bert-finetuned-phishing_ |
Opensourced/wormgpt-24 | Opensourced | 2024-02-07T04:31:50Z | 0 | 6 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T04:21:04Z | ---
license: apache-2.0
---
```python
from datasets import load_dataset

dataset = load_dataset("suriyagunasekar/stackoverflow-python-with-meta-data")
``` |
Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Instruct | Telugu-LLM-Labs | 2024-02-07T04:24:52Z | 173 | 13 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"te",
"en",
"dataset:Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized",
"dataset:Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-06T12:07:42Z | ---
license: llama2
datasets:
- Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized
- >-
Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized
language:
- te
- en
---
# Telugu-Llama2-7B-v0-Instruct
This model is based on [Telugu-Llama2-7B-v0-Base](https://huggingface.co/Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Base) and has been fine-tuned on the following instruction datasets:
1. [yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized)
2. [teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized)
# Input Text Format
```
### Instruction: {instruction}
### Input: {input}
## Response: {response}
```
# Usage
## With Romanized Telugu
```python3
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name = "Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="right")
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device)
instruction = "Krindi samaacharam prakaram google app eppudu release ayyindi?"
input ="Google News is a news aggregator service developed by Google. It presents a continuous flow of links to articles organized from thousands of publishers and magazines. Google News is available as an app on Android, iOS, and the Web. Google released a beta version in September 2002 and the official app in January 2006."
text = f"""Instruction: {instruction} \nInput: {input} \nResponse:"""
encodings = tokenizer(text, padding=True, return_tensors="pt")
encodings = encodings.to(device)
with torch.inference_mode():
outputs = model.generate(encodings.input_ids, do_sample=False, max_new_tokens=500)
output = tokenizer.batch_decode(outputs.detach(), skip_special_tokens=True)
```
### Sample Output:
```
1. September 2002 Google released a beta version of Google News.
2. January 2006 Google released the official version of Google News.
```
## With Native Telugu
```python3
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name = "Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="right")
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device)
instruction = "కింది వచనాన్ని సంగ్రహించండి"
input="గూగుల్ వార్తలు అనేది గూగుల్ ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ. ఇది వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్లను అందిస్తుంది. గూగుల్ వార్తలు Android, iOS మరియు వెబ్లో యాప్గా అందుబాటులో ఉన్నాయి. గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్ను మరియు జనవరి 2006లో అధికారిక యాప్ను విడుదల చేసింది."
text = f"""Instruction: {instruction} \nInput: {input} \nResponse:"""
encodings = tokenizer(text, padding=True, return_tensors="pt")
encodings = encodings.to(device)
with torch.inference_mode():
outputs = model.generate(encodings.input_ids, do_sample=False, max_new_tokens=500)
output = tokenizer.batch_decode(outputs.detach(), skip_special_tokens=True)
```
### Sample Output:
1. గూగుల్ వార్తలు అనేది గూగుల్ ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ, వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్లను అందిస్తుంది.
2. గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్ మరియు జనవరి 2006లో అధికారిక యాప్ ను విడుదల చేసింది.
# Developers:
The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions.
# Note:
The model is quite sensitive to parameters and inputs and is not yet ready for production. It remains in the experimental phase, and we recommend using it accordingly. |
sneakykilli/Qatar_BERTopic | sneakykilli | 2024-02-07T04:18:52Z | 3 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-02-07T03:52:25Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# Qatar_BERTopic
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("sneakykilli/Qatar_BERTopic")
topic_model.get_topic_info()
```
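The loaded model can also assign topics to unseen documents (a small follow-up sketch; the example reviews are illustrative):
```python
from bertopic import BERTopic

topic_model = BERTopic.load("sneakykilli/Qatar_BERTopic")

# .transform returns one topic id and one probability per document.
new_docs = [
    "My refund request has been pending for weeks with no reply.",
    "The baggage allowance on my Doha flight was changed at check-in.",
]
topics, probs = topic_model.transform(new_docs)
print(topics)
```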
## Topic overview
* Number of topics: 22
* Number of training documents: 714
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | doha - qatar - airline - airlines - refund | 5 | -1_doha_qatar_airline_airlines |
| 0 | doha - qatar - airline - airlines - flights | 211 | 0_doha_qatar_airline_airlines |
| 1 | refund - refunded - refunds - booking - voucher | 78 | 1_refund_refunded_refunds_booking |
| 2 | doha - qatar - baggage - luggage - airline | 72 | 2_doha_qatar_baggage_luggage |
| 3 | airline - passengers - flights - attendant - steward | 49 | 3_airline_passengers_flights_attendant |
| 4 | qatar - airline - airlines - flights - carriers | 44 | 4_qatar_airline_airlines_flights |
| 5 | baggage - doha - airlines - airline - luggage | 39 | 5_baggage_doha_airlines_airline |
| 6 | airline - airlines - flights - emirates - flight | 35 | 6_airline_airlines_flights_emirates |
| 7 | refund - airline - flights - flight - cancel | 32 | 7_refund_airline_flights_flight |
| 8 | airline - airlines - seats - qatar - seating | 28 | 8_airline_airlines_seats_qatar |
| 9 | qatar - doha - airlines - flights - emirates | 18 | 9_qatar_doha_airlines_flights |
| 10 | customer - complaints - service - terrible - horrible | 17 | 10_customer_complaints_service_terrible |
| 11 | qatar - complaint - doha - complaints - airline | 15 | 11_qatar_complaint_doha_complaints |
| 12 | avios - qatar - booking - compensation - aviso | 14 | 12_avios_qatar_booking_compensation |
| 13 | airline - airlines - flight - airplane - horrible | 9 | 13_airline_airlines_flight_airplane |
| 14 | doha - qatar - flights - cancellation - airlines | 8 | 14_doha_qatar_flights_cancellation |
| 15 | doha - qatar - qatari - emirates - flight | 8 | 15_doha_qatar_qatari_emirates |
| 16 | doha - qatar - airlines - bangkok - airport | 8 | 16_doha_qatar_airlines_bangkok |
| 17 | seats - seating - airline - booked - seat | 7 | 17_seats_seating_airline_booked |
| 18 | qatar - opodo - airline - refunded - voucher | 6 | 18_qatar_opodo_airline_refunded |
| 19 | doha - qatar - flight - destinations - airways | 6 | 19_doha_qatar_flight_destinations |
| 20 | qatar - airlines - disability - flight - wheelchair | 5 | 20_qatar_airlines_disability_flight |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 5
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.24.3
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.3.1
* Transformers: 4.36.2
* Numba: 0.57.1
* Plotly: 5.16.1
* Python: 3.10.12
|
sneakykilli/Singapore_BERTopic | sneakykilli | 2024-02-07T04:18:48Z | 4 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-02-07T03:52:40Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# Singapore_BERTopic
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("sneakykilli/Singapore_BERTopic")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 10
* Number of training documents: 160
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | airline - airlines - flights - refund - flight | 6 | -1_airline_airlines_flights_refund |
| 0 | airline - airlines - flights - singapore - meals | 31 | 0_airline_airlines_flights_singapore |
| 1 | refund - airline - airlines - complaint - singapore | 43 | 1_refund_airline_airlines_complaint |
| 2 | baggage - luggage - airlines - airline - bags | 20 | 2_baggage_luggage_airlines_airline |
| 3 | airlines - passengers - seats - flight - cabin | 14 | 3_airlines_passengers_seats_flight |
| 4 | refund - repayment - sia - customer - complaints | 11 | 4_refund_repayment_sia_customer |
| 5 | airlines - airline - fees - singapore - flights | 10 | 5_airlines_airline_fees_singapore |
| 6 | refund - airline - cancellation - booking - cancel | 9 | 6_refund_airline_cancellation_booking |
| 7 | miles - airlines - airline - mileage - loyalty | 9 | 7_miles_airlines_airline_mileage |
| 8 | airline - flight - reviews - booking - customer | 7 | 8_airline_flight_reviews_booking |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 5
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.24.3
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.3.1
* Transformers: 4.36.2
* Numba: 0.57.1
* Plotly: 5.16.1
* Python: 3.10.12
|
wentingzhao/question-evaluator | wentingzhao | 2024-02-07T04:12:53Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-05T04:50:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chenhaodev/mistral-7b-medmcqa-inst-v1 | chenhaodev | 2024-02-07T04:06:07Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-07T03:31:34Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-medmcqa-inst-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-medmcqa-inst-v1
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the medmcqa_instruct dataset.
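A hedged usage sketch for loading the LoRA adapter on top of the base model is shown below (the adapter repository id is assumed to match this card's name; note that the evaluation header further down uses a different namespace):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "chenhaodev/mistral-7b-medmcqa-inst-v1"  # assumed to be this card's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```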
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-medmcqa-inst-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.48|± |0.0502|
|professional_medicine| 0|none | 0|acc | 0.61|± |0.0490|
|college_medicine | 0|none | 0|acc | 0.57|± |0.0498|
|clinical_knowledge | 0|none | 0|acc | 0.65|± |0.0479|
|ocn |Yaml |none | 0|acc | 0.68|± |0.0469|
|aocnp |Yaml |none | 0|acc | 0.56|± |0.0499|
### Original Performance (mistralai/Mistral-7B-v0.1)
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.45|± |0.0500|
|professional_medicine| 0|none | 0|acc | 0.64|± |0.0482|
|college_medicine | 0|none | 0|acc | 0.65|± |0.0479|
|clinical_knowledge | 0|none | 0|acc | 0.68|± |0.0469|
|ocn |Yaml |none | 0|acc | 0.62|± |0.0488|
|aocnp |Yaml |none | 0|acc | 0.47|± |0.0502|
|
houdini001/nep-spell-mbart-epoch5 | houdini001 | 2024-02-07T03:55:54Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:houdini001/nep-spell-mbart-epoch3",
"base_model:finetune:houdini001/nep-spell-mbart-epoch3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-06T19:18:48Z | ---
tags:
- generated_from_trainer
base_model: houdini001/nep-spell-mbart-epoch3
model-index:
- name: nep-spell-mbart-epoch5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nep-spell-mbart-epoch5
This model is a fine-tuned version of [houdini001/nep-spell-mbart-epoch3](https://huggingface.co/houdini001/nep-spell-mbart-epoch3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0026 | 0.32 | 2000 | 0.0001 |
| 0.0 | 0.63 | 4000 | 0.0001 |
| 0.0 | 0.95 | 6000 | 0.0000 |
| 0.0 | 1.27 | 8000 | 0.0000 |
| 0.0 | 1.58 | 10000 | 0.0000 |
| 0.0 | 1.9 | 12000 | 0.0000 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
frntcx/Reinforce | frntcx | 2024-02-07T03:50:28Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T03:50:21Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 348.70 +/- 57.73
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
humung/koalpaca-polyglot-12.8B-lora-vlending-v0.1 | humung | 2024-02-07T03:49:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T03:49:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
weijie210/zephyr-7b-UFB-0 | weijie210 | 2024-02-07T03:49:39Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T01:25:02Z | ---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-UFB-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-UFB-0
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1492
- Rewards/chosen: -1.5452
- Rewards/rejected: -7.2115
- Rewards/accuracies: 0.8359
- Rewards/margins: 5.6663
- Logps/rejected: -171.0846
- Logps/chosen: -143.6666
- Logits/rejected: -2.3237
- Logits/chosen: -2.3692
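For reference, the reward metrics above are the standard DPO quantities; the sketch below shows how they are typically derived from summed sequence log-probabilities (a non-authoritative illustration; `beta=0.1` is an assumption, as the card does not state it):
```python
import torch
import torch.nn.functional as F

def dpo_metrics(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Per-pair DPO rewards, accuracy, and loss from sequence log-probabilities."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = chosen_rewards - rejected_rewards
    loss = -F.logsigmoid(margins).mean()
    accuracy = (chosen_rewards > rejected_rewards).float().mean()
    return chosen_rewards.mean(), rejected_rewards.mean(), accuracy, loss

# Toy example with two preference pairs.
print(dpo_metrics(torch.tensor([-140.0, -150.0]), torch.tensor([-170.0, -180.0]),
                  torch.tensor([-138.0, -148.0]), torch.tensor([-165.0, -175.0])))
```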
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/DeepMagic-Coder-7b-AWQ | LoneStriker | 2024-02-07T03:46:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-02-07T03:44:58Z | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
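Conceptually, the method amounts to the following framework-agnostic sketch over raw state dicts (an illustration only; mergekit's actual implementation also applies the `normalize` and `int8_mask` options configured below):
```python
import torch

def task_arithmetic_merge(base_sd, finetuned_sds, weights):
    """Add weighted task vectors (finetune minus base) back onto the base parameters."""
    merged = {}
    for name, base_param in base_sd.items():
        delta = sum(w * (sd[name] - base_param) for sd, w in zip(finetuned_sds, weights))
        merged[name] = base_param + delta
    return merged

# Toy example with a single two-element "parameter".
base = {"w": torch.tensor([1.0, 1.0])}
ft_a = {"w": torch.tensor([1.5, 0.5])}
ft_b = {"w": torch.tensor([2.0, 1.0])}
print(task_arithmetic_merge(base, [ft_a, ft_b], [1.0, 1.0]))
```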
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using Mergekit, and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |