modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-13 18:27:38) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 518 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-13 18:27:10) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e-1_s6789_v3_l4_r2 | KingKazma | 2023-08-12T20:25:00Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T20:24:56Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
shanover/medbot_godel_v3 | shanover | 2023-08-12T20:21:14Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-08-12T18:37:32Z | ---
license: mit
widget:
- text: I am having itching, skin rash, and nodal skin eruptions
example_title: Fungal infection example
- text: I feel like vomiting, breathlessness, and sweating
example_title: Heart Attack example
- text: I am feeling fatigue, weight loss, restlessness and also lethargy.
example_title: Diabetes example
---
# Disease Prognosis and Precautions Text2Text Generation
Welcome to the Disease Prognosis and Precautions Text2Text Generation repository! This model is a fine-tuned version of microsoft/GODEL-v1_1-large-seq2seq, designed to generate responses describing a likely disease prognosis and recommended precautions based on given symptoms.
## Model Overview
The model in this repository is a text-to-text generation model. It takes a prompt in the form of symptoms related to a particular disease and generates a response that includes the potential disease prognosis along with recommended precautions. The columns used in the training dataset are:
- **Disease:** The name of the disease related to the symptoms.
- **Symptoms:** The list of symptoms provided in the prompt.
- **Precautions:** The recommended precautions for the identified disease.
## Examples
Here are some examples of how you can use the model:
### Example 1
**Prompt:** "I am feeling continuous sneezing, shivering and chills"
**Response:** "Seems like allergy. You should try to avoid dust and air pollution."
### Example 2
**Prompt:** "I am feeling itching, skin rash and patches"
**Response:** "Seems like fungal infection. You should bathe twice a day and use antifungal soap."
## How to Use
To use the model for generating disease prognosis and precautions based on symptoms, you can use the `generate` function provided by the Hugging Face Transformers library. Here's a basic example using Python:
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the model and tokenizer
model_name = "shanover/medbot_godel_v3"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Run on GPU when available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Define your symptom prompt
prompt = "I am feeling continuous sneezing, shivering and chills"

def generate_response(input_text, model, tokenizer, max_length=128):
    # max_length=128 is an arbitrary default used to truncate the input prompt
    input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=max_length, truncation=True)
    input_ids = input_ids.to(device)
    with torch.no_grad():
        output_ids = model.generate(input_ids)
    generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return generated_text

print(generate_response(prompt, model, tokenizer))
```
Remember to replace `"shanover/medbot_godel_v3"` with the actual name or path of the model you've downloaded or fine-tuned.
## Acknowledgments
Fine-tuned from microsoft/GODEL-v1_1-large-seq2seq: https://huggingface.co/microsoft/GODEL-v1_1-large-seq2seq
## Issues and Contributions
If you encounter any issues while using the model or have suggestions for improvements, please feel free to open an issue in this repository. Contributions are also welcome!
## Disclaimer
Please note that the information generated by the model is for informational purposes only and should not be considered a substitute for professional medical advice. Always consult a medical professional for accurate diagnoses and treatments.
Thank you for using the Disease Prognosis and Precautions Text2Text Generation model! We hope it proves to be a helpful tool. |
MRNH/ppo-SoccerTwos | MRNH | 2023-08-12T20:15:04Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-08-12T20:14:46Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MRNH/ppo-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
EmirhanExecute/Reinforce-Pixelcopter | EmirhanExecute | 2023-08-12T19:55:09Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-12T19:29:17Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -2.40 +/- 0.49
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Shahzay/bert-base-banking77-pt2 | Shahzay | 2023-08-12T19:54:49Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-08-12T12:57:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9348825614500316
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2956
- F1: 0.9349
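As a quick illustration (not part of the original card), here is a sketch of how such a checkpoint is typically used with the `text-classification` pipeline; the repo id is taken from this card and the example query is invented:

```python
from transformers import pipeline

# Minimal usage sketch, assuming the checkpoint is available on the Hub
classifier = pipeline("text-classification", model="Shahzay/bert-base-banking77-pt2")
print(classifier("I still have not received my new card, when will it arrive?"))
```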
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6757 | 1.0 | 1251 | 0.5184 | 0.8868 |
| 0.265 | 2.0 | 2502 | 0.3234 | 0.9207 |
| 0.1595 | 3.0 | 3753 | 0.2986 | 0.9375 |
| 0.034 | 4.0 | 5004 | 0.2956 | 0.9349 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
RazzzHF/sashabanks | RazzzHF | 2023-08-12T19:49:37Z | 0 | 1 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2023-08-12T19:42:48Z | ---
license: cc-by-nc-sa-4.0
---
|
mabrar4281/paraphraser-bart-large-finetuned-chatgptphrases | mabrar4281 | 2023-08-12T19:35:45Z | 105 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-08-12T19:26:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: paraphraser-bart-large-finetuned-chatgptphrases
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphraser-bart-large-finetuned-chatgptphrases
This model is a fine-tuned version of [stanford-oval/paraphraser-bart-large](https://huggingface.co/stanford-oval/paraphraser-bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0972
- Rouge1: 9.1735
- Rouge2: 5.1237
- Rougel: 7.5344
- Rougelsum: 7.5475
- Gen Len: 19.3333
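As a rough illustration (not part of the original card), the checkpoint could be exercised through the `text2text-generation` pipeline; the input sentence below is invented:

```python
from transformers import pipeline

# Minimal paraphrasing sketch, assuming the fine-tuned checkpoint is on the Hub
paraphraser = pipeline("text2text-generation", model="mabrar4281/paraphraser-bart-large-finetuned-chatgptphrases")
print(paraphraser("The weather today is unusually warm for this time of year.", max_length=60))
```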
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 38 | 1.0972 | 9.1735 | 5.1237 | 7.5344 | 7.5475 | 19.3333 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
beaugogh/Llama2-7b-sharegpt4 | beaugogh | 2023-08-12T19:23:44Z | 1,506 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-12T19:15:51Z | ---
license: apache-2.0
---
Llama2-7b fully fine-tuned on Openchat's sharegpt4 dataset |
MredK/evrenv1 | MredK | 2023-08-12T19:19:23Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2023-08-12T19:16:27Z | ---
license: openrail
---
Made with a 4-minute dataset\
The training is my own\
250 Epochs\
Turkish model\ |
camus-ng/lora-trained-xl-cory-1 | camus-ng | 2023-08-12T19:03:23Z | 0 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-08-12T17:31:15Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks man
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - camus-ng/lora-trained-xl-cory-1
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks man using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
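A minimal inference sketch (not from the original card), assuming the adapter loads with diffusers' `load_lora_weights`; the prompt reuses the instance prompt above and the VAE is the fp16 fix named above:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load the SDXL base model with the fp16-fixed VAE mentioned in this card
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaptation weights from this repository
pipe.load_lora_weights("camus-ng/lora-trained-xl-cory-1")

image = pipe("a photo of sks man", num_inference_steps=30).images[0]
image.save("sks_man.png")
```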
|
mrkusypl/Kamil-Steinbach | mrkusypl | 2023-08-12T18:59:08Z | 0 | 0 | null | [
"pl",
"region:us"
]
| null | 2023-08-09T21:21:41Z | ---
language:
- pl
---
<center>
<img src="https://cdn.discordapp.com/attachments/1138952916259057776/1138952916804313160/x1080.png"></img>
<h1>Kamil Steinbach (RVC v2) (Mangio Crepe 64) (800 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Kamil Steinbach <br/>
**Dataset:** 00:19:41 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1138952916259057776/1138952944700633229/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1138952916259057776/1138953959034339468/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/Kamil-Steinbach/resolve/main/Kamil%20Steinbach%20%5B800%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center> |
lomahony/eleuther-pythia6.9b-hh-dpo | lomahony | 2023-08-12T18:42:01Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2305.18290",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-07T13:17:08Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
---
[Pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) supervised finetuned with [Anthropic-hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) for 1 epoch [(sft-model)](https://huggingface.co/lomahony/eleuther-pythia6.9b-hh-sft),
before DPO [(paper)](https://arxiv.org/abs/2305.18290) with same dataset for 1 epoch.
[wandb log](https://wandb.ai/pythia_dpo/Pythia_LOM/runs/i53fbcih?workspace=user-lauraomahony)
Benchmark evaluations included in repo done using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor).
See [Pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) for original model details [(paper)](https://arxiv.org/abs/2101.00027).
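A minimal generation sketch (not from the original card); the Human/Assistant prompt format is an assumption based on the Anthropic hh-rlhf data:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lomahony/eleuther-pythia6.9b-hh-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed dialogue-style prompt; adjust to your own use case
inputs = tokenizer("Human: How do I bake bread?\n\nAssistant:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```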
|
lomahony/eleuther-pythia6.9b-hh-sft | lomahony | 2023-08-12T18:41:18Z | 73 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-07T13:16:56Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
---
[Pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) supervised finetuned with [Anthropic-hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) for 1 epoch.
[wandb log](https://wandb.ai/pythia_dpo/Pythia_LOM/runs/vd9e3it0?workspace=user-lauraomahony)
Benchmark evaluations included in repo done using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor).
See [Pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) for model details [(paper)](https://arxiv.org/abs/2101.00027).
|
lomahony/eleuther-pythia410m-hh-dpo | lomahony | 2023-08-12T18:38:34Z | 192 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2305.18290",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-21T11:36:00Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
---
[Pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) supervised finetuned with [Anthropic-hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) for 1 epoch [(sft-model)](https://huggingface.co/lomahony/eleuther-pythia70m-hh-sft), before DPO [(paper)](https://arxiv.org/abs/2305.18290) with same dataset for 1 epoch.
[wandb log](https://wandb.ai/pythia_dpo/Pythia_LOM/runs/b3mh4jba?workspace=user-lauraomahony)
Benchmark evaluations included in repo done using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor).
See [Pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) for original model details [(paper)](https://arxiv.org/abs/2101.00027).
|
Ryukijano/lora-trained-xl-colab | Ryukijano | 2023-08-12T18:33:36Z | 1 | 2 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-08-12T17:00:12Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of an elf from baldurs gate
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Ryukijano/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a corpus of photos using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
lomahony/eleuther-pythia160m-hh-sft | lomahony | 2023-08-12T18:28:03Z | 196 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-21T11:27:13Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
---
[Pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) supervised finetuned with [Anthropic-hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) for 1 epoch.
[wandb log](https://wandb.ai/pythia_dpo/Pythia_LOM/runs/z78b7xqm?workspace=user-lauraomahony)
Benchmark evaluations included in repo done using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor).
See [Pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) for model details [(paper)](https://arxiv.org/abs/2101.00027).
|
flavour/CLIP-ViT-B-16-DataComp.XL-s13B-b90K | flavour | 2023-08-12T18:25:30Z | 4,609 | 1 | open_clip | [
"open_clip",
"pytorch",
"safetensors",
"clip",
"zero-shot-image-classification",
"dataset:mlfoundations/datacomp_pools",
"arxiv:2304.14108",
"license:mit",
"region:us"
]
| zero-shot-image-classification | 2023-07-27T10:09:33Z | ---
license: mit
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
library_name: open_clip
datasets:
- mlfoundations/datacomp_pools
pipeline_tag: zero-shot-image-classification
---
Original Repo https://huggingface.co/laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K
Added transformers support
```python
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("marcusinthesky/CLIP-ViT-L-14-DataComp.XL-s13B-b90K")
```
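For completeness, here is a zero-shot classification sketch (not from the original card); the image URL and candidate labels are reused from the widget example in this card's metadata, and it assumes the repo ships a `CLIPProcessor` configuration:

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "marcusinthesky/CLIP-ViT-L-14-DataComp.XL-s13B-b90K"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Image and labels taken from the widget example in the card metadata
url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["playing music", "playing sports"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```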
# Model card for CLIP ViT-L-14 trained DataComp-1B
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
A CLIP ViT-L/14 model trained with the DataComp-1B (https://github.com/mlfoundations/datacomp) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Model training done on the [stability.ai](https://stability.ai/) cluster.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the DataComp paper (https://arxiv.org/abs/2304.14108) includes additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
# Training Details
## Training Data
This model was trained with the 1.4 Billion samples of the DataComp-1B dataset (https://arxiv.org/abs/2304.14108).
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning also holds there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models, as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide our dataset openly, we do not, however, recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
Please see https://arxiv.org/abs/2304.14108.
# Evaluation
Evaluation done on 38 datasets, using the [DataComp repo](https://github.com/mlfoundations/datacomp) and the [LAION CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed on a suite of 38 datasets. See our paper for more details (https://arxiv.org/abs/2304.14108).
## Results
The model achieves a 79.2% zero-shot top-1 accuracy on ImageNet-1k. See our paper for more details and results (https://arxiv.org/abs/2304.14108).
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.
# Citation
**BibTeX:**
DataComp
```bibtex
@article{datacomp,
title={DataComp: In search of the next generation of multimodal datasets},
author={Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt},
journal={arXiv preprint arXiv:2304.14108},
year={2023}
}
```
OpenAI CLIP paper
```
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
OpenCLIP software
```
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
# How to Get Started with the Model
See https://github.com/mlfoundations/open_clip |
lomahony/eleuther-pythia70m-hh-dpo | lomahony | 2023-08-12T18:24:49Z | 200 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2305.18290",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-21T11:19:45Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
---
[Pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) supervised finetuned with [Anthropic-hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) for 1 epoch [(sft-model)](https://huggingface.co/lomahony/eleuther-pythia70m-hh-sft),
before DPO [(paper)](https://arxiv.org/abs/2305.18290) with same dataset for 1 epoch.
[wandb log](https://wandb.ai/pythia_dpo/Pythia_LOM/runs/p5itdbbe?workspace=user-lauraomahony)
Benchmark evaluations included in repo done using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor).
See [Pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) for original model details [(paper)](https://arxiv.org/abs/2101.00027).
|
LarryAIDraw/zxq-f | LarryAIDraw | 2023-08-12T18:18:15Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T18:04:07Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/126988/oreki-x-chitanda-hyouka-x |
LarryAIDraw/Tachibana_Chizuru | LarryAIDraw | 2023-08-12T18:17:56Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T18:08:28Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/126905/tachibana-chizuru-why-the-hell-are-you-here-teacher |
LarryAIDraw/SuffrenV1_0 | LarryAIDraw | 2023-08-12T18:17:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T18:08:01Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/126913/suffren-or-azur-lane-or |
LarryAIDraw/bakemonogatari_kissshot-10 | LarryAIDraw | 2023-08-12T18:17:19Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T18:07:38Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/126614/kiss-shot-acerola-orion-heart-under-blade-or-bakemonogatari |
LarryAIDraw/amano_erika_v1 | LarryAIDraw | 2023-08-12T18:16:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T18:07:19Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/126784/amano-erika-kakkou-no-iinazuke |
LarryAIDraw/yanfei-03 | LarryAIDraw | 2023-08-12T18:16:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T18:07:01Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/126773/yanfei-or-genshin-impact-or-lora-or-pop-waifu |
LarryAIDraw/baltimore_v1 | LarryAIDraw | 2023-08-12T18:15:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T18:05:42Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/127285/baltimore-azur-lane |
LarryAIDraw/summerhan-20 | LarryAIDraw | 2023-08-12T18:14:54Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T18:04:28Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/127246/baobhan-sith-summer-oufit-fate-grand-order |
LarryAIDraw/swordmaidenv2 | LarryAIDraw | 2023-08-12T18:14:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T18:03:43Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/11416/sword-maiden-goblin-slayer |
InexperiencedMe/LunarLanderPPO | InexperiencedMe | 2023-08-12T18:07:06Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-12T18:06:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.73 +/- 23.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
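One possible completion of the TODO above (a sketch, not the author's code; the checkpoint filename is an assumption):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the filename is an assumed naming convention
checkpoint = load_from_hub(repo_id="InexperiencedMe/LunarLanderPPO", filename="LunarLanderPPO.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded agent on LunarLander-v2
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```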
|
davzoku/cria-llama2-7b-v1.1 | davzoku | 2023-08-12T17:56:15Z | 2 | 0 | peft | [
"peft",
"pytorch",
"llama",
"region:us"
]
| null | 2023-08-12T14:36:18Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
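For illustration only (not from the original card), the listed settings roughly correspond to a `BitsAndBytesConfig` like this:

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization settings listed above (illustrative sketch)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```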
### Framework versions
- PEFT 0.4.0
|
rodriguezj314/pixel_train_hoodie_model_beta | rodriguezj314 | 2023-08-12T17:55:45Z | 28 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-12T17:42:33Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks hoodie
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - rodriguezj314/pixel_train_hoodie_model_beta
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks hoodie using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
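A minimal inference sketch (not from the original card), using the instance prompt above; it assumes the repository loads directly as a `StableDiffusionPipeline`:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth checkpoint from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "rodriguezj314/pixel_train_hoodie_model_beta", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks hoodie", num_inference_steps=30).images[0]
image.save("sks_hoodie.png")
```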
|
ultimateabhi/de-en-1 | ultimateabhi | 2023-08-12T17:51:03Z | 0 | 0 | null | [
"de",
"en",
"dataset:wmt14",
"arxiv:1910.09700",
"license:bsd",
"region:us"
]
| null | 2023-08-12T17:27:12Z | ---
license: bsd
datasets:
- wmt14
language:
- de
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ReporterMarina/autotrain-more_brokers-81853141865 | ReporterMarina | 2023-08-12T17:43:50Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:ReporterMarina/autotrain-data-more_brokers",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-08-12T17:43:22Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- ReporterMarina/autotrain-data-more_brokers
co2_eq_emissions:
emissions: 0.13075876324223337
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 81853141865
- CO2 Emissions (in grams): 0.1308
## Validation Metrics
- Loss: 1.096
- Accuracy: 0.333
- Macro F1: 0.167
- Micro F1: 0.333
- Weighted F1: 0.167
- Macro Precision: 0.111
- Micro Precision: 0.333
- Weighted Precision: 0.111
- Macro Recall: 0.333
- Micro Recall: 0.333
- Weighted Recall: 0.333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ReporterMarina/autotrain-more_brokers-81853141865
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ReporterMarina/autotrain-more_brokers-81853141865", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ReporterMarina/autotrain-more_brokers-81853141865", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Anonymous-1122/Anonymous-1122 | Anonymous-1122 | 2023-08-12T17:19:02Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-08-12T16:11:34Z | ## LLaMA-Adapter Multimodal
### Installation
```
conda create -n llama_adapter python=3.9 -y
conda activate llama_adapter
pip install -r requirements.txt
```
### Web Demo
Make sure the pretrained weight is at ```./llama_adapter_multimodal_full```
and run web demo:
```
python app.py
``` |
oml1111/ppo-LunarLander | oml1111 | 2023-08-12T17:16:40Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-12T17:16:18Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.27 +/- 20.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
HamZurger/DQN_SpaceInvadersNoFrameskip | HamZurger | 2023-08-12T17:16:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-12T17:16:03Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 365.00 +/- 106.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HamZurger -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HamZurger -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga HamZurger
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
davzoku/cria-llama2-7b-v1.1_peft | davzoku | 2023-08-12T17:00:36Z | 44 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T17:00:27Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
abhijeet2022/ppo-LunarLander-v2 | abhijeet2022 | 2023-08-12T16:45:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-12T16:44:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 276.55 +/- 20.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
fengtc/chatglm2-6b | fengtc | 2023-08-12T16:40:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"chatglm",
"glm",
"thudm",
"custom_code",
"zh",
"en",
"arxiv:2103.10360",
"arxiv:2210.02414",
"arxiv:1911.02150",
"endpoints_compatible",
"region:us"
]
| null | 2023-08-12T16:22:06Z | ---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM2-6B
<p align="center">
💻 <a href="https://github.com/THUDM/ChatGLM2-6B" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> • 📃 <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-1y7pqoloy-9b1g6T6JjA8J0KxvUjbwJw" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM-6B/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
## 介绍
ChatGLM**2**-6B 是开源中英双语对话模型 [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) 的第二代版本,在保留了初代模型对话流畅、部署门槛较低等众多优秀特性的基础之上,ChatGLM**2**-6B 引入了如下新特性:
1. **更强大的性能**:基于 ChatGLM 初代模型的开发经验,我们全面升级了 ChatGLM2-6B 的基座模型。ChatGLM2-6B 使用了 [GLM](https://github.com/THUDM/GLM) 的混合目标函数,经过了 1.4T 中英标识符的预训练与人类偏好对齐训练,[评测结果](#评测结果)显示,相比于初代模型,ChatGLM2-6B 在 MMLU(+23%)、CEval(+33%)、GSM8K(+571%) 、BBH(+60%)等数据集上的性能取得了大幅度的提升,在同尺寸开源模型中具有较强的竞争力。
2. **更长的上下文**:基于 [FlashAttention](https://github.com/HazyResearch/flash-attention) 技术,我们将基座模型的上下文长度(Context Length)由 ChatGLM-6B 的 2K 扩展到了 32K,并在对话阶段使用 8K 的上下文长度训练,允许更多轮次的对话。但当前版本的 ChatGLM2-6B 对单轮超长文档的理解能力有限,我们会在后续迭代升级中着重进行优化。
3. **更高效的推理**:基于 [Multi-Query Attention](http://arxiv.org/abs/1911.02150) 技术,ChatGLM2-6B 有更高效的推理速度和更低的显存占用:在官方的模型实现下,推理速度相比初代提升了 42%,INT4 量化下,6G 显存支持的对话长度由 1K 提升到了 8K。
4. **更开放的协议**:ChatGLM2-6B 权重对学术研究**完全开放**,在填写[问卷](https://open.bigmodel.cn/mla/form)进行登记后**亦允许免费商业使用**。
ChatGLM**2**-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B). It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing the following new features:
1. **Stronger Performance**: Based on the development experience of the first-generation ChatGLM model, we have fully upgraded the base model of ChatGLM2-6B. ChatGLM2-6B uses the hybrid objective function of [GLM](https://github.com/THUDM/GLM), and has undergone pre-training with 1.4T bilingual tokens and human preference alignment training. The [evaluation results](README.md#evaluation-results) show that, compared to the first-generation model, ChatGLM2-6B has achieved substantial improvements in performance on datasets like MMLU (+23%), CEval (+33%), GSM8K (+571%), BBH (+60%), showing strong competitiveness among models of the same size.
2. **Longer Context**: Based on [FlashAttention](https://github.com/HazyResearch/flash-attention) technique, we have extended the context length of the base model from 2K in ChatGLM-6B to 32K, and trained with a context length of 8K during the dialogue alignment, allowing for more rounds of dialogue. However, the current version of ChatGLM2-6B has limited understanding of single-round ultra-long documents, which we will focus on optimizing in future iterations.
3. **More Efficient Inference**: Based on [Multi-Query Attention](http://arxiv.org/abs/1911.02150) technique, ChatGLM2-6B has more efficient inference speed and lower GPU memory usage: under the official implementation, the inference speed has increased by 42% compared to the first generation; under INT4 quantization, the dialogue length supported by 6G GPU memory has increased from 1K to 8K.
4. **More Open License**: ChatGLM2-6B weights are **completely open** for academic research, and **free commercial use** is also allowed after completing the [questionnaire](https://open.bigmodel.cn/mla/form).
## Software Dependencies
```shell
pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate
```
## Code Usage
You can chat with the ChatGLM2-6B model using the following code:
```ipython
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
>>> model = model.eval()
>>> response, history = model.chat(tokenizer, "你好", history=[])
>>> print(response)
你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
>>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
>>> print(response)
晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
```
关于更多的使用说明,包括如何运行命令行和网页版本的 DEMO,以及使用模型量化以节省显存,请参考我们的 [Github Repo](https://github.com/THUDM/ChatGLM2-6B)。
For more instructions, including how to run CLI and web demos, and model quantization, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM2-6B).
## Change Log
* v1.0
## License
The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the ChatGLM2-6B model weights must follow the [Model License](MODEL_LICENSE).
## Citation
If you find our work helpful, please consider citing the following papers. The ChatGLM2-6B paper will be released soon, so stay tuned.
```
@article{zeng2022glm,
title={Glm-130b: An open bilingual pre-trained model},
author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
journal={arXiv preprint arXiv:2210.02414},
year={2022}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
``` |
CheeriosMomentors/SDXL-Loras | CheeriosMomentors | 2023-08-12T16:33:36Z | 0 | 0 | null | [
"art",
"en",
"license:wtfpl",
"region:us"
]
| null | 2023-08-12T16:05:29Z | ---
license: wtfpl
language:
- en
tags:
- art
---
I just put this here so it won't be as annoying once I put my loras in the Google Collab |
SargeZT/t2i-adapter-sdxl-segmentation | SargeZT | 2023-08-12T16:24:16Z | 789 | 13 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"t2i_adapter",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-08-02T17:24:57Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- t2i_adapter
inference: true
---
# T2I_Adapter-SargeZT/t2i-adapter-sdxl-segmentation
These are T2I-adapter weights trained on stabilityai/stable-diffusion-xl-base-1.0. Please note this uses custom code to initialize the T2I. [You can find the adapter, pipeline, and training code here.](https://github.com/AMorporkian/T2IAdapter-SDXL-Diffusers)
You can find some example images below.
prompt: ['a cat laying on top of a blanket on a bed']

prompt: ['two elephants are walking in a zoo enclosure']

prompt: ['a variety of items are laid out on a table']

prompt: ['a sandwich and french fries on a tray']

prompt: ['a crowd of people flying kites on a beach']

prompt: ['a man holding a rainbow colored umbrella in front of a crowd']

prompt: ['a man riding skis down a snow covered slope']

prompt: ['a yellow fire hydrant sitting in the middle of a sidewalk']

## License
[SDXL 1.0 License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
|
rodriguezj314/pixel_train_model_beta | rodriguezj314 | 2023-08-12T16:12:41Z | 25 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-12T15:29:42Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - rodriguezj314/pixel_train_model_beta
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
bladevb/whisper-small-dv | bladevb | 2023-08-12T16:08:02Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-08-12T14:46:56Z | ---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 12.939458218868449
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1669
- Wer Ortho: 61.8776
- Wer: 12.9395
## Model description
More information needed
## Intended uses & limitations
More information needed
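As a minimal transcription sketch (the audio file below is a placeholder), the checkpoint can be used with the 🤗 `pipeline` API:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="bladevb/whisper-small-dv")
result = asr("dhivehi_sample.wav")  # placeholder path to a local audio file
print(result["text"])
```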
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1231 | 1.63 | 500 | 0.1669 | 61.8776 | 12.9395 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
alxxtexxr/WizardLM-13B-v1.2-Sharded-8GB | alxxtexxr | 2023-08-12T16:03:43Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-12T10:47:01Z | ---
license: llama2
pipeline_tag: text-generation
---
# Disclaimer: I do not own the weights of WizardLM-13B-V1.2, nor did I train the model. I only sharded or split the model weights.
The actual weights can be found [here](https://huggingface.co/WizardLM/WizardLM-13B-V1.2).
The rest of the README is copied from the same page listed above.
This is the **Full-Weight** of WizardLM-13B V1.2 model, this model is trained from **Llama-2 13b**.
## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/bpmeZD7V" target="_blank">Discord</a>
</p>
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4% </sup>|<sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | <sup>99.3% </sup> |<sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8% </sup> | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1% </sup> |<sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | <sup>78.0% </sup> |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
| <sup>WizardCoder-15B-V1.0</sup> | <sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a></sup> | || |<sup> 57.3 pass@1 </sup> | <sup> <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a></sup> |
</font>
**Repository**: https://github.com/nlpxucan/WizardLM
**Twitter**:
- 🔥🔥🔥 [7/25/2023] We released **WizardLM V1.2** models. The **WizardLM-13B-V1.2** is here ([Demo_13B-V1.2](https://b7a19878988c8c73.gradio.app), [Demo_13B-V1.2_bak-1](https://d0a37a76e0ac4b52.gradio.app/), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.2)). Please checkout the [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/25/2023] The **WizardLM-13B-V1.2** achieves **7.06** on [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **89.17%** on [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **101.4%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: MT-Bench and AlpacaEval are all self-test, will push update and request review. All tests are completed under their official settings.)
❗<b>Note for model system prompts usage:</b>
<b>WizardLM</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Hi
ASSISTANT: Hello.
USER: Who are you?
ASSISTANT: I am WizardLM.
......
```
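For illustration, a minimal sketch of assembling this prompt and generating with 🤗 Transformers (the generation settings are arbitrary and this is not an official example):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "alxxtexxr/WizardLM-13B-v1.2-Sharded-8GB"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
prompt = f"{system} USER: Who are you? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```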
❗<b>To address common concerns about the dataset:</b>
Recently, there have been clear changes in the open-source policy and regulations of our overall organization's code, data, and models.
Despite this, we have still worked hard to get approval to release the model weights first, but the dataset requires stricter auditing and is still under review by our legal team.
Our researchers have no authority to publicly release them without authorization.
Thank you for your understanding. |
Hwilner/q-FrozenLake-v1-4x4-noSlippery | Hwilner | 2023-08-12T15:47:43Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-12T15:47:40Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the helper from the Deep RL course notebook (it downloads and unpickles the Q-table dict)
model = load_from_hub(repo_id="Hwilner/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fengtc/chatglm-6b-int4 | fengtc | 2023-08-12T15:47:36Z | 136 | 0 | transformers | [
"transformers",
"pytorch",
"chatglm",
"glm",
"thudm",
"custom_code",
"zh",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2023-08-12T15:36:29Z | ---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM-6B-INT4
<p align="center">
👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-1udqapmrr-ocT1DS_mxWe6dDY8ahRWzg" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM-6B/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
## Introduction
ChatGLM-6B is an open-source dialogue language model that supports question answering in both Chinese and English. It is based on the [General Language Model (GLM)](https://github.com/THUDM/GLM) architecture and has 6.2 billion parameters. Combined with model quantization, it can be deployed locally on consumer-grade GPUs (as little as 6 GB of VRAM at the INT4 quantization level). ChatGLM-6B uses the same technology as [ChatGLM](https://chatglm.cn) and is optimized for Chinese question answering and dialogue. After bilingual Chinese/English training on roughly 1T tokens, supplemented by supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback, the 6.2-billion-parameter ChatGLM-6B can already generate answers that align well with human preferences.
ChatGLM-6B-INT4 contains the quantized weights of ChatGLM-6B. Specifically, the 28 GLM Blocks of ChatGLM-6B are quantized to INT4, while the Embedding and LM Head are left unquantized. In theory, the quantized model can run inference within 6 GB of VRAM (or RAM when running on CPU), making it possible to run on embedded devices such as a Raspberry Pi.
When running on CPU, a CPU kernel is compiled automatically for the local hardware. Please make sure GCC and OpenMP are installed (usually preinstalled on Linux; on Windows they must be installed manually) to get the best parallel-computing performance.
## Software Dependencies
```shell
pip install protobuf transformers==4.27.1 cpm_kernels
```
## Usage
You can generate a conversation with the ChatGLM-6B model using the following code:
```ipython
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).half().cuda()
>>> response, history = model.chat(tokenizer, "你好", history=[])
>>> print(response)
你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
>>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
>>> print(response)
晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
```
For more usage instructions, including how to run the command-line and web demos and how to use model quantization to save VRAM, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM-6B).
## License
The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the ChatGLM-6B model weights must follow the [Model License](MODEL_LICENSE).
## Citation
If you find our work helpful, please consider citing the following papers:
```
@inproceedings{
zeng2023glm-130b,
title={{GLM}-130B: An Open Bilingual Pre-trained Model},
author={Aohan Zeng and Xiao Liu and Zhengxiao Du and Zihan Wang and Hanyu Lai and Ming Ding and Zhuoyi Yang and Yifan Xu and Wendi Zheng and Xiao Xia and Weng Lam Tam and Zixuan Ma and Yufei Xue and Jidong Zhai and Wenguang Chen and Zhiyuan Liu and Peng Zhang and Yuxiao Dong and Jie Tang},
booktitle={The Eleventh International Conference on Learning Representations (ICLR)},
year={2023},
url={https://openreview.net/forum?id=-Aw0rrrPUF}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
``` |
1toTree/chatglm2-6b-law-ptuning | 1toTree | 2023-08-12T15:47:31Z | 0 | 1 | null | [
"zh",
"license:mit",
"region:us"
]
| null | 2023-08-12T15:25:20Z | ---
license: mit
language:
- zh
---
**P-tuning method**<br>
**20,000 legal question-answer pairs**<br>
**Fine-tuned weights**<br>
**Capable of handling question-answering tasks in some legal scenarios** |
rdpb/lora-trained-xl-colab | rdpb | 2023-08-12T15:46:07Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-08-12T01:32:37Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of rdpb
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - rdpb/lora-trained-xl-colab
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of rdpb using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
nagupv/Llama-7B_LLMExam_f2 | nagupv | 2023-08-12T15:35:08Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T15:35:06Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Ar4ikov/civitai_prompts_falcon_15k_v2_4bit | Ar4ikov | 2023-08-12T15:16:47Z | 11 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T15:16:43Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
|
realzdlegend/autotrain-pneumonia-81787141863 | realzdlegend | 2023-08-12T15:02:32Z | 181 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:realzdlegend/autotrain-data-pneumonia",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-08-12T14:58:15Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- realzdlegend/autotrain-data-pneumonia
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 1.7413317292123676
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 81787141863
- CO2 Emissions (in grams): 1.7413
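A minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="realzdlegend/autotrain-pneumonia-81787141863")
print(classifier("chest_xray.jpg"))  # returns class labels with scores
```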
## Validation Metrics
- Loss: 0.550
- Accuracy: 0.729
- Macro F1: 0.763
- Micro F1: 0.729
- Weighted F1: 0.726
- Macro Precision: 0.832
- Micro Precision: 0.729
- Weighted Precision: 0.785
- Macro Recall: 0.748
- Micro Recall: 0.729
- Weighted Recall: 0.729 |
wlsa/quiz | wlsa | 2023-08-12T14:57:47Z | 0 | 1 | peft | [
"peft",
"gpt_neox",
"region:us"
]
| null | 2023-08-12T10:11:31Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Evan-Lin/Bart-abs-yelp-allure-v1 | Evan-Lin | 2023-08-12T14:49:47Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| reinforcement-learning | 2023-08-12T14:41:43Z | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-abs-yelp-allure-v1")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-abs-yelp-allure-v1")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-abs-yelp-allure-v1")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Chat-Error/BluemoonRP-L2-13B-This-time-will-be-better | Chat-Error | 2023-08-12T14:37:09Z | 2 | 2 | peft | [
"peft",
"region:us"
]
| null | 2023-08-09T14:18:13Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
nagupv/Llama-7B_LLMExam_f1 | nagupv | 2023-08-12T14:12:46Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T14:12:44Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
papepipopu/ppo-PyramidsRND-course | papepipopu | 2023-08-12T14:02:07Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-08-12T14:01:03Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: papepipopu/ppo-PyramidsRND-course
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ivantkrs/s | ivantkrs | 2023-08-12T13:48:21Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
]
| null | 2023-08-12T13:47:59Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FelixChao/llama2-13b-ft-adapters-math1.1 | FelixChao | 2023-08-12T13:47:49Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T13:47:22Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_5_e8_s6789_v3_l6_v100 | KingKazma | 2023-08-12T13:29:08Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T13:29:07Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Ahmed-Taha/xlm-roberta-base-finetuned-panx-de | Ahmed-Taha | 2023-08-12T13:25:23Z | 134 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-08-12T08:59:29Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.4358974358974359
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- F1: 0.4359
## Model description
More information needed
## Intended uses & limitations
More information needed
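A minimal usage sketch with the token-classification pipeline (the example sentence is arbitrary):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Ahmed-Taha/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Werk von Siemens in München."))
```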
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 50 | 0.9789 | 0.3077 |
| No log | 2.0 | 100 | 0.6996 | 0.4359 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.0.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_5_e7_s6789_v3_l6_v100 | KingKazma | 2023-08-12T13:22:20Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T13:22:19Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_5_e6_s6789_v3_l6_v100 | KingKazma | 2023-08-12T13:15:32Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T13:15:30Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
stephanebhiri/lora-trained-xl-colab-stp21 | stephanebhiri | 2023-08-12T12:58:06Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-08-12T12:37:18Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: sks male singer
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - stephanebhiri/lora-trained-xl-colab-stp21
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on sks male singer using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
wandabwa2004/falcon-7b-safcom | wandabwa2004 | 2023-08-12T12:56:57Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-06T10:07:45Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
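A minimal loading sketch; the base checkpoint `tiiuae/falcon-7b` is an assumption based on the repo name, and the 4-bit config mirrors the values listed above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "wandabwa2004/falcon-7b-safcom")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```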
### Framework versions
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_5_e2_s6789_v3_l6_v100 | KingKazma | 2023-08-12T12:48:16Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T12:48:15Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
nkpz/llama2-22b-daydreamer-v2 | nkpz | 2023-08-12T12:45:40Z | 17 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-12T10:06:07Z | ---
license: other
---
Continuation of https://huggingface.co/nkpz/llama2-22b-daydreamer-v1
Trained it a couple more epochs on airoboros v2 general data.
This toned down its obsession with over-explaining itself - it's a little less instruction-focused and feels more versatile. Still needs work overall, but I think it's moving in the right direction.
Still more to come. Going to try to find a way to improve its problem solving skills without ruining the output style. |
rahuketu86/Model-PandemicSafety | rahuketu86 | 2023-08-12T12:45:37Z | 0 | 0 | fastai | [
"fastai",
"region:us"
]
| null | 2023-08-10T09:03:57Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
geoka/ppo-LunarLander-v2_second | geoka | 2023-08-12T12:45:31Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-12T12:45:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 143.77 +/- 162.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
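A possible completion of the snippet above (the checkpoint filename is an assumption; check the repo's file listing for the actual name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption -- replace with the actual .zip stored in this repo.
checkpoint = load_from_hub("geoka/ppo-LunarLander-v2_second", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```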
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_5_e1_s6789_v3_l6_v100 | KingKazma | 2023-08-12T12:41:27Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T12:41:25Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_5_e0_s6789_v3_l6_v100 | KingKazma | 2023-08-12T12:34:38Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T12:34:36Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Asilkan/mycustom_summarization_model | Asilkan | 2023-08-12T12:34:28Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-08-12T12:29:18Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: mycustom_summarization_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1386
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mycustom_summarization_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5992
- Rouge1: 0.1386
- Rouge2: 0.0475
- Rougel: 0.1129
- Rougelsum: 0.1129
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
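A minimal summarization sketch following the standard T5 recipe (the `summarize:` prefix and the placeholder bill text are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Asilkan/mycustom_summarization_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "summarize: The people of the State of California do enact as follows: ..."  # placeholder bill text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```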
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8842 | 0.1287 | 0.0356 | 0.1075 | 0.1078 | 19.0 |
| No log | 2.0 | 124 | 2.6762 | 0.1303 | 0.0427 | 0.1086 | 0.1086 | 19.0 |
| No log | 3.0 | 186 | 2.6165 | 0.1352 | 0.046 | 0.1112 | 0.111 | 19.0 |
| No log | 4.0 | 248 | 2.5992 | 0.1386 | 0.0475 | 0.1129 | 0.1129 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_5_e-1_s6789_v3_l6_v100 | KingKazma | 2023-08-12T12:27:48Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T12:27:47Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
helamri/a2c-PandaReachDense-v3 | helamri | 2023-08-12T12:21:09Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-12T12:08:36Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
bladevb/distilhubert-finetuned-gtzan | bladevb | 2023-08-12T12:19:28Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-08-12T08:37:44Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5805
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
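A minimal inference sketch (the audio file is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="bladevb/distilhubert-finetuned-gtzan")
print(classifier("my_track.wav"))  # returns genre labels with scores
```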
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0028 | 1.0 | 113 | 1.8044 | 0.51 |
| 1.354 | 2.0 | 226 | 1.1991 | 0.65 |
| 1.0236 | 3.0 | 339 | 0.9521 | 0.74 |
| 0.8043 | 4.0 | 452 | 0.8474 | 0.74 |
| 0.6249 | 5.0 | 565 | 0.7328 | 0.78 |
| 0.4027 | 6.0 | 678 | 0.6832 | 0.81 |
| 0.447 | 7.0 | 791 | 0.5989 | 0.84 |
| 0.172 | 8.0 | 904 | 0.5600 | 0.86 |
| 0.3328 | 9.0 | 1017 | 0.5495 | 0.85 |
| 0.1516 | 10.0 | 1130 | 0.5805 | 0.84 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_5_e9_s6789_v3_l6_v100 | KingKazma | 2023-08-12T12:19:19Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T12:19:18Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
stephanebhiri/lora-trained-xl-colab-stp20 | stephanebhiri | 2023-08-12T12:18:21Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-08-12T11:56:42Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: olistephane
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - stephanebhiri/lora-trained-xl-colab-stp20
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on olistephane using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
TimKoornstra/SAURON | TimKoornstra | 2023-08-12T12:12:26Z | 6 | 3 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"style",
"representation",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-06-29T15:12:57Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- style
- representation
license: gpl-3.0
language:
- en
---
# SAURON: Stylistic AUthorship RepresentatiON Model
## Overview
SAURON is a sentence-transformers model designed to represent the unique stylistic nuances of authorship. By mapping sentences and paragraphs into a 768-dimensional dense vector space, SAURON can be employed for tasks such as clustering or stylistic search. This model was developed as part of a master's thesis in Artificial Intelligence, and it leverages semantically similar utterances to enhance writing style embedding models.
## Key Features
- **Semantically Similar Utterances**: SAURON uses pairs of utterances that convey the same meaning but are expressed differently in style. This approach helps the model focus more on the stylistic aspects rather than the content.
- **Diverse Training Data**: The model was trained on a vast range of conversations from Reddit, ensuring a broad representation of both authorship and topics.
- **Performance Evaluation**: The STyle EvaLuation (STEL) framework was employed to gauge the model's efficacy in capturing writing styles.
- **Content Control**: The introduction of semantically similar utterances greatly enhanced performance, offering better control over content.
## Applications
- **Stylistic Search**: Search for content based on its writing style rather than its subject matter.
- **Clustering**: Group text based on the stylistic similarities of the authors.
- **Style-Content Disentanglement**: Enhance models and applications that require distinguishing between style and content.
## Research Insights
1. While semantically similar utterances significantly improved performance, the most efficient approach combines this technique with conversation-based sampling.
2. Strategies such as maintaining diversity in authorship and topics proved effective for data preparation.
3. The SAURON model considerably outperformed its predecessors, marking a significant step forward in style-content disentanglement tasks.
## More Information
For a comprehensive overview, including the complete thesis and training setup details, visit the [SAURON GitHub repository](https://github.com/TimKoornstra/SAURON).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('TimKoornstra/SAURON')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TimKoornstra/SAURON')
model = AutoModel.from_pretrained('TimKoornstra/SAURON')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 137066 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 0.5}
```
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 54826,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
If you use this project in your research, please cite this repository and the associated master's thesis. The BibTeX entry for the thesis is:
```bibtex
@mastersthesis{Koornstra2023,
author = {Tim Koornstra},
title = {SAURON: Leveraging Semantically Similar Utterances to Enhance Writing Style Embedding Models},
school = {Utrecht University},
year = {2023},
address = {Utrecht, The Netherlands},
month = {June},
note = {Available at: \url{https://github.com/TimKoornstra/SAURON}}
}
``` |
Trelis/Llama-2-7b-chat-hf-function-calling-GPTQ | Trelis | 2023-08-12T12:10:09Z | 18 | 4 | transformers | [
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"functions",
"function calling",
"sharded",
"ggml",
"gptq",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-08-10T15:09:13Z | ---
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- functions
- function calling
- sharded
- ggml
- gptq
---
# fLlama 2 - Function Calling Llama 2
- fLlama 2 extends the hugging face Llama 2 models with function calling capabilities.
- The model responds with a structured json argument with the function name and arguments
Available models:
- fLlama-7B ([bitsandbytes NF4](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling)), ([GGML](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-GGML)), ([GPTQ](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-GPTQ)) - free
- fLlama-13B ([bitsandbytes NF4](https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling)), ([GPTQ](https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling-GPTQ)) - paid
## Inference with Google Colab and HuggingFace 🤗
**GPTQ (fastest + good accuracy)**
Get started by saving your own copy of this [function calling chatbot](https://colab.research.google.com/drive/1u8x41Jx8WWtI-nzHOgqTxkS3Q_lcjaSX?usp=sharing).
You will be able to run inference using a free Colab notebook if you select a gpu runtime. See the notebook for more details.
**Bits and Bytes NF4 (slowest inference)**
Try out this notebook [fLlama_Inference notebook](https://colab.research.google.com/drive/1Ow5cQ0JNv-vXsT-apCceH6Na3b4L7JyW?usp=sharing)
**GGML (best for running on a laptop, great for Mac)**
To run this you'll need to install llama.cpp from ggerganov on GitHub.
- Download the ggml file from the ggml link above, under available models
- I recommend running a command like:
```
./server -m fLlama-2-7b-chat.ggmlv3.q3_K_M.bin -ngl 32 -c 2048
```
which will allow you to run a chatbot in your browser. The -ngl flag offloads layers to the Mac's GPU and gives very good token generation speed.
## Licensing and Usage
fLlama-7B:
- Llama 2 license
fLlama-13B:
- For higher precision on function calling.
- Purchase access here: [fLlama-13b: €19.99 per user/seat.](https://buy.stripe.com/9AQ7te3lHdmbdZ68wz)
- Licenses are not transferable to other users/entities.
- Commercial licenses for larger models are available on request - email ronan [at] trelis [dot] com
- Use of fLlama models is further subject to terms in the [Meta license](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).
### Dataset
The dataset used for training this model can be found at [Trelis Function Calling Extended Dataset](https://huggingface.co/datasets/Trelis/function_calling_extended).
## Prompt and Response Format
To make a function call, you should format your input like this:
```
<s>[INST] <<SYS>>
You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant:
{
"function": "search_bing",
"description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.",
"arguments": [
{
"name": "query",
"type": "string",
"description": "The search query string"
}
]
}
{
"function": "search_arxiv",
"description": "Search for research papers on ArXiv. Make use of AND, OR and NOT operators as appropriate to join terms within the query.",
"arguments": [
{
"name": "query",
"type": "string",
"description": "The search query string"
}
]
}
To call a function, respond - immediately and only - with a JSON object of the following format:
{
"function": "function_name",
"arguments": {
"argument1": "argument_value",
"argument2": "argument_value"
}
}
<</SYS>>
Find papers on high pressure batch reverse osmosis [/INST]
```
Notice that functionMetadata should be a string representation of a JSON object, like this:
```
"functionMetaData": {
"function": "search_bing",
"description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.",
"arguments": [
{
"name": "query",
"type": "string",
"description": "The search query string"
}
]
}
'''
```
and the language model should respond with a json object formatted like this:
```
{
"function": "function_name",
"arguments": {
"argument1": "argument_value",
"argument2": "argument_value"
}
}
```
It is recommended to handle cases where:
- There is no json object in the response
- The response contains text in addition to the json response
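For example, a defensive parser along these lines (illustrative only; not part of the model itself) covers both cases:
```python
import json
import re

def extract_function_call(response: str):
    """Return the first JSON object found in the model response, or None if parsing fails."""
    match = re.search(r"\{.*\}", response, re.DOTALL)  # grab the outermost braces, ignoring surrounding text
    if match is None:
        return None  # no json object in the response
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None  # json-looking text that does not parse cleanly

call = extract_function_call('Sure! {"function": "search_arxiv", "arguments": {"query": "reverse osmosis"}}')
print(call["function"] if call else "fall back to plain-text handling")
```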
## Quantization Configurations
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
~
Below follows information on the original Llama 2 model...
~
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)| |
epikziom34/Dawid-Jasper-RVC-v2 | epikziom34 | 2023-08-12T11:57:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-08-12T11:52:27Z | Dawid Jasper - RVC v2 - 300 Epochs |
KingKazma/xsum_gpt2_p_tuning_500_10_3000_5_e6_s6789_v3_l6_v100 | KingKazma | 2023-08-12T11:53:00Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T11:52:59Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
openbmb/VisCPM-Chat | openbmb | 2023-08-12T11:45:22Z | 11 | 21 | transformers | [
"transformers",
"pytorch",
"viscpmchatbee",
"feature-extraction",
"custom_code",
"zh",
"en",
"region:us"
]
| feature-extraction | 2023-06-28T02:18:29Z | ---
language:
- zh
- en
---
# VisCPM
简体中文 | [English](README_en.md)
<p align="center">
<p align="left">
<a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-dfd.svg"></a>
<a href=""><img src="https://img.shields.io/badge/python-3.8+-aff.svg"></a>
</p>
`VisCPM` is a family of open-source large multimodal models, which support multimodal conversational capabilities (`VisCPM-Chat` model) and text-to-image generation capabilities (`VisCPM-Paint` model) in both Chinese and English, achieving state-of-the-art performance among Chinese open-source multimodal models. `VisCPM` is trained based on the large language model [CPM-Bee](https://github.com/OpenBMB/CPM-Bee) with 10B parameters, fusing a visual encoder (Q-Former) and a visual decoder (Diffusion-UNet) to support visual inputs and outputs. Thanks to the good bilingual capability of CPM-Bee, `VisCPM` can be pre-trained with English multimodal data only and generalize well to achieve promising Chinese multimodal capabilities.
## VisCPM-Chat
`VisCPM-Chat` supports bilingual (Chinese-English) multimodal conversation about images. The model uses `Q-Former` as the visual encoder and CPM-Bee (10B) as the underlying language model, fused through a language-modeling training objective. Training consists of two stages, pretraining and instruction tuning:
* Pretraining: we pretrained `VisCPM-Chat` on roughly 100M high-quality English image-text pairs, drawn from CC3M, CC12M, COCO, Visual Genome, LAION, and other sources. During pretraining, the language model parameters are kept frozen and only part of the `Q-Former` parameters are updated, to support efficient alignment of large-scale vision-language representations.
* Instruction tuning: we use the [LLaVA-150K](https://llava-vl.github.io/) English instruction-tuning data, mixed with corresponding translated Chinese data, to align the model's multimodal abilities with user intent. In this stage all model parameters are updated to make better use of the instruction-tuning data. Interestingly, we found that even when instruction tuning uses English data only, the model can understand Chinese questions but can only answer in English, indicating that its multilingual multimodal abilities generalize well. Adding a small amount of translated Chinese data at this stage aligns the model's response language with the user's question language.
We evaluated the model on the LLaVA English test set and a translated Chinese test set. The benchmark measures open-domain conversation, image detail description, and complex reasoning, scored by GPT-4. `VisCPM-Chat` achieves the best average performance in Chinese multimodal capability, performs strongly on open-domain conversation and complex reasoning, and also shows good English multimodal capability.
<table>
<tr>
<td align="center" rowspan="2" colspan="2">模型</td>
<td align="center" colspan="4">英文</td>
<td align="center" colspan="4">中文</td>
</tr>
<tr>
<td align="center">多模态对话</td>
<td align="center">细节描述</td>
<td align="center">复杂推理</td>
<td align="center">平均</td>
<td align="center">多模态对话</td>
<td align="center">细节描述</td>
<td align="center">复杂推理</td>
<td align="center">平均</td>
</tr>
<tr>
<td align="center" rowspan="3">英文模型</td>
<td align="center">MiniGPT4</td>
<td align="center">65</td>
<td align="center">67.3</td>
<td align="center">76.6</td>
<td align="center">69.7</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="center">InstructBLIP</td>
<td align="center">81.9</td>
<td align="center">68</td>
<td align="center">91.2</td>
<td align="center">80.5</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="center">LLaVA</td>
<td align="center">89.5</td>
<td align="center">70.4</td>
<td align="center">96.2</td>
<td align="center">85.6</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="center" rowspan="4">中英双语</td>
<td align="center">mPLUG-Owl </td>
<td align="center">64.6</td>
<td align="center">47.7</td>
<td align="center">80.1</td>
<td align="center">64.2</td>
<td align="center">76.3</td>
<td align="center">61.2</td>
<td align="center">77.8</td>
<td align="center">72</td>
</tr>
<tr>
<td align="center">VisualGLM</td>
<td align="center">62.4</td>
<td align="center">63</td>
<td align="center">80.6</td>
<td align="center">68.7</td>
<td align="center">76.6</td>
<td align="center">87.8</td>
<td align="center">83.6</td>
<td align="center">82.7</td>
</tr>
<tr>
<td align="center">Ziya (LLaMA 13B)</td>
<td align="center">82.7</td>
<td align="center">69.9</td>
<td align="center">92.1</td>
<td align="center">81.7</td>
<td align="center">85</td>
<td align="center">74.7</td>
<td align="center">82.4</td>
<td align="center">80.8</td>
</tr>
<tr>
<td align="center">VisCPM-Chat</td>
<td align="center">83.3</td>
<td align="center">68.9</td>
<td align="center">90.5</td>
<td align="center">81.1</td>
<td align="center">92.7</td>
<td align="center">76.1</td>
<td align="center">89.2</td>
<td align="center">86.3</td>
</tr>
</table>
## VisCPM-Paint
`VisCPM-Paint` supports bilingual (Chinese-English) text-to-image generation. The model uses CPM-Bee (10B) as the text encoder and a `UNet` as the image decoder, fused through a diffusion-model training objective. The language model parameters are kept frozen throughout training. The visual decoder is initialized with the UNet weights of [Stable Diffusion 2.1](https://github.com/Stability-AI/stablediffusion) and fused with the language model by progressively unfreezing its key bridging parameters: first the linear layer that maps text representations into the visual model is trained, then the cross-attention layers of the `UNet` are further unfrozen. The model was trained on the [LAION 2B](https://laion.ai/) English image-text pairs.
Similar to `VisCPM-Chat`, we found that, thanks to the bilingual capability of CPM-Bee, `VisCPM-Paint` can be trained on English image-text pairs only and still generalize to good Chinese text-to-image generation, reaching the best results among Chinese open-source models. Adding a further 20M cleaned native Chinese image-text pairs and 120M image-text pairs translated into Chinese improves its Chinese generation further. We sampled 30k images from MSCOCO and computed FID (Fréchet Inception Distance) and CLIP Score; the former evaluates the quality of the generated images and the latter how well they match the input.
<table>
<tr>
<td align="center" rowspan="2">模型</td>
<td align="center" colspan="2">英文</td>
<td align="center" colspan="2">中文</td>
</tr>
<tr>
<td align="center">FID↓</td>
<td align="center">CLIP Score↑</td>
<td align="center">FID↓</td>
<td align="center">CLIP Score↑</td>
</tr>
<tr>
<td align="center">AltDiffusion</td>
<td align="center">17.16</td>
<td align="center">25.24</td>
<td align="center">16.09</td>
<td align="center">24.05</td>
</tr>
<tr>
<td align="center">TaiyiDiffusion</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">15.58</td>
<td align="center">22.69</td>
</tr>
<tr>
<td align="center">Stable Diffusion</td>
<td align="center">9.08</td>
<td align="center">26.22</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="center">VisCPM-Paint-en</td>
<td align="center">9.51</td>
<td align="center">25.35</td>
<td align="center">10.86</td>
<td align="center">23.38</td>
</tr>
<tr>
<td align="center">VisCPM-Paint-zh</td>
<td align="center">9.98</td>
<td align="center">25.04</td>
<td align="center">9.65</td>
<td align="center">24.17</td>
</tr>
</table>
# Installation
```Shell
conda create -n viscpm python=3.10 -y
conda activate viscpm
pip install setuptools
pip install diffusers jieba matplotlib numpy opencv_python
pip install pandas Pillow psutil pydantic scipy
pip install torch==1.13.1 torchscale==0.2.0 torchvision==0.14.1 timm
pip install transformers==4.28.0
pip install tqdm typing_extensions
pip install git+https://github.com/thunlp/OpenDelta.git
pip install git+https://github.com/OpenBMB/CPM-Bee.git#egg=cpm-live&subdirectory=src
```
VisCPM requires a single GPU with more than 40GB of memory to run; we will release a more memory-efficient inference option as soon as possible.
## Usage
```python
>>> from transformers import AutoModel, AutoTokenizer, AutoImageProcessor
>>> from PIL import Image
>>> tokenizer = AutoTokenizer.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True)
>>> processor = AutoImageProcessor.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True)
>>> model = AutoModel.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True).to('cuda')
>>> data = [{
>>> 'context': '',
>>> 'question': 'describe this image in detail.',
>>> 'image': tokenizer.unk_token * model.query_num,
>>> '<ans>': ''
>>> }]
>>> image = Image.open('case.jpg')
>>> result = model.generate(data, tokenizer, processor, image)
>>> print(result[0]['<ans>'])
这幅图片显示了一群热气球在天空中飞行。这些热气球漂浮在不同的地方,包括山脉、城市和乡村地区。
``` |
KingKazma/xsum_gpt2_p_tuning_500_10_3000_5_e5_s6789_v3_l6_v100 | KingKazma | 2023-08-12T11:44:13Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T11:44:12Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_5_e3_s6789_v3_l6_v100 | KingKazma | 2023-08-12T11:26:37Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T11:26:37Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
rcarrata/rcarrata-finetuning-sentiment-model-3000-samples | rcarrata | 2023-08-12T11:24:01Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-08-12T11:17:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: rcarrata-finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8741721854304636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rcarrata-finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3195
- Accuracy: 0.8733
- F1: 0.8742
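For a quick check of the fine-tuned checkpoint, it can be loaded with the Transformers `pipeline` API (the review text below is made up):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rcarrata/rcarrata-finetuning-sentiment-model-3000-samples")
# Returns a list with the predicted label and its score for each input text
print(classifier("This movie was surprisingly good, I would happily watch it again."))
```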
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
fp16-guy/Stable-Diffusion-v1-5_fp16_cleaned | fp16-guy | 2023-08-12T11:08:56Z | 0 | 1 | null | [
"text-to-image",
"region:us"
]
| text-to-image | 2023-08-12T10:11:41Z | ---
pipeline_tag: text-to-image
---
SD 1.5 / SD 1.5 inpainting, but fp16/cleaned - smaller size, same result.
========
///
**[**original checkpoint link**](https://huggingface.co/runwayml/stable-diffusion-v1-5)**, **[**original checkpoint link (inpainting)**](https://huggingface.co/runwayml/stable-diffusion-inpainting)**
*[*grid 01*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/sd%201.5%2001%2020230812133125-111-sd-v1-5-inpainting-Euler%20a-6.png) *(1.99gb version)*
*[*grid 02*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/sd%201.5%20inp%2001%2020230812134521-111-v1-5-pruned-Euler%20a-5.5.png) *(1.99gb inpaint version)* |
josephamess/llama-2-7b-MultiChoiceFineTuned-rag | josephamess | 2023-08-12T10:58:05Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T10:57:59Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
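For reference, a sketch of how the config above could be rebuilt when loading the base model and attaching this adapter (the base model ID is an assumption, not stated in this card):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the values listed above (8-bit loading; the 4-bit fields are inactive here)
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "josephamess/llama-2-7b-MultiChoiceFineTuned-rag")
```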
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_5_e-1_s6789_v3_l6_v100 | KingKazma | 2023-08-12T10:51:26Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T10:51:25Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
JaiveerGill/fine-tuned-chem-model-final-eos | JaiveerGill | 2023-08-12T10:50:37Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T10:50:13Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
TinToTin/a2c-PandaReachDense-v3 | TinToTin | 2023-08-12T10:40:27Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-12T10:34:37Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.28 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained policy from the Hub and load it
checkpoint = load_from_hub(repo_id="TinToTin/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
vj1148/lora-peft-holding-classification | vj1148 | 2023-08-12T10:12:20Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-12T07:13:36Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
Fantast/yolos-small-finetuned-for-seal | Fantast | 2023-08-12T09:49:55Z | 118 | 1 | transformers | [
"transformers",
"pytorch",
"yolos",
"object-detection",
"arxiv:2106.00666",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-08-12T09:34:46Z | ### YOLOS (small-sized) model Finetuned For Seal Detection Task
#### YOLOS model based on `hustvl/yolos-small` and fine-tuned on Our Seal Image Dataset.
#### Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss.
#### How to use
Here is how to use this model:
```
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
image = Image.open("xxxxxxxxxxxxx")
feature_extractor = YolosFeatureExtractor.from_pretrained('fantast/yolos-small-finetuned-for-seal')
model = YolosForObjectDetection.from_pretrained('fantast/yolos-small-finetuned-for-seal')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# model predicts bounding boxes and class logits
logits = outputs.logits
bboxes = outputs.pred_boxes
```
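To turn these raw outputs into thresholded boxes, something like the following should work, assuming a Transformers version that provides `post_process_object_detection`:

```python
import torch

# Rescale predictions to the original image size and keep confident detections
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = feature_extractor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} at {[round(c, 1) for c in box.tolist()]}")
```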
Currently, both the feature extractor and model support PyTorch.
#### Training data
The YOLOS model is based on `hustvl/yolos-small` and fine-tuned on our own seal image dataset, which consists of 118k/5k annotated images for training/validation respectively.
BibTeX entry and citation info
```
@article{DBLP:journals/corr/abs-2106-00666,
author = {Yuxin Fang and
Bencheng Liao and
Xinggang Wang and
Jiemin Fang and
Jiyang Qi and
Rui Wu and
Jianwei Niu and
Wenyu Liu},
title = {You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection},
journal = {CoRR},
volume = {abs/2106.00666},
year = {2021},
url = {https://arxiv.org/abs/2106.00666},
eprinttype = {arXiv},
eprint = {2106.00666},
timestamp = {Fri, 29 Apr 2022 19:49:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
---
license: mit
---
|
Fantast/segformer-mit-b0-finetuned-for-seal | Fantast | 2023-08-12T09:47:12Z | 33 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"arxiv:2105.15203",
"endpoints_compatible",
"region:us"
]
| null | 2023-08-12T09:42:43Z | ### SegFormer Finetuned for Seal Segmentation
#### Based on https://huggingface.co/nvidia/mit-b0
#### How to use
Here is how to use this model to run semantic segmentation on an image from the COCO 2017 dataset:
```
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
image_processor = AutoImageProcessor.from_pretrained("Fantast/segformer-mit-b0-finetuned-for-seal")
model = SegformerForSemanticSegmentation.from_pretrained("Fantast/segformer-mit-b0-finetuned-for-seal")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
list(logits.shape)
```
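To get a per-pixel class map at the original resolution, the logits can be upsampled and argmaxed; this is a generic sketch, not taken from the original card:

```python
import torch

# Upsample from (batch, num_labels, h/4, w/4) to the input image size, then take the argmax per pixel
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
seg_map = upsampled.argmax(dim=1)[0]  # (height, width) tensor of predicted class ids
print(seg_map.shape)
```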
For more code examples, we refer to the documentation.
License
The license for this model can be found here.
BibTeX entry and citation info
```
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
---
license: mit
---
|
AaltoSpeech/CombFinnish-AED-CRDNN | AaltoSpeech | 2023-08-12T09:34:40Z | 4 | 0 | speechbrain | [
"speechbrain",
"automatic-speech-recognition",
"Attention",
"pytorch",
"fi",
"region:us"
]
| automatic-speech-recognition | 2023-08-12T07:52:20Z | ---
language: "fi"
thumbnail:
tags:
- automatic-speech-recognition
- Attention
- pytorch
- speechbrain
metrics:
- wer
- cer
---
# Description
Finnish Attention-based Encoder-Decoder model trained on:
- Puhelahjat (1500h of colloquial Finnish donated by a large number of volunteers)
- Finnish Parliament ASR Corpus (3000h of speech from sessions of the Finnish Parliament)
The Encoder is a CRDNN (Conv+LSTM+DNN), Decoder is GRU.
# Performance expectations
This is a relatively fast and compact model (~40M parameters), so performance is not state-of-the-art.
It does not include a language model; the model is fully end-to-end.
This model should generalize to many types of speech. However, the model will also try to match colloquial speech (unlike some models which have learned to follow the written forms of Finnish). In fact, being able to recognise many different dialects is a goal of the Puhelahjat data. The model is not especially robust to noise.
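A possible way to run the model with SpeechBrain, assuming the repository ships a `hyperparams.yaml` compatible with the `EncoderDecoderASR` interface (the audio file name is a placeholder):

```python
from speechbrain.pretrained import EncoderDecoderASR

asr = EncoderDecoderASR.from_hparams(
    source="AaltoSpeech/CombFinnish-AED-CRDNN",
    savedir="pretrained_models/CombFinnish-AED-CRDNN",
)
# Expects a mono audio file; transcribe_file returns the decoded text
print(asr.transcribe_file("finnish_sample.wav"))
```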
|
BADeid/Joy_Training_Part_5 | BADeid | 2023-08-12T09:30:55Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-12T09:22:57Z | ---
license: creativeml-openrail-m
---
|
svensk-k/xlm-roberta-base-finetuned-panx-de-fr | svensk-k | 2023-08-12T09:30:13Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-08-12T09:17:57Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- F1: 0.8620
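A quick way to try the checkpoint is the token-classification pipeline (the example sentence is made up):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="svensk-k/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Angela Merkel hat Paris im Juli besucht."))
```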
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2873 | 1.0 | 715 | 0.1802 | 0.8245 |
| 0.1446 | 2.0 | 1430 | 0.1601 | 0.8512 |
| 0.0925 | 3.0 | 2145 | 0.1606 | 0.8620 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
golaxy/gogpt2-13b | golaxy | 2023-08-12T09:19:39Z | 1,433 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"llama2",
"chinese-llama2",
"gogpt2-7b",
"zh",
"dataset:c-s-ale/alpaca-gpt4-data-zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-31T13:32:31Z | ---
license: apache-2.0
datasets:
- c-s-ale/alpaca-gpt4-data-zh
language:
- zh
tags:
- llama2
- chinese-llama2
- gogpt2-7b
---
# GoGPT2-13B: A Chinese-English Enhanced Large Model Trained from Llama2-13b

<p align="center">
<img alt="GitHub" src="https://img.shields.io/github/license/ymcui/Chinese-LLaMA-Alpaca.svg?color=blue&style=flat-square">
<img alt="GitHub top language" src="https://img.shields.io/github/languages/top/ymcui/Chinese-LLaMA-Alpaca">
</p>
> ICT bilingual (Chinese-English) enhanced foundation models: 7 billion and 13 billion parameters
GoGPT weights are provided on 🤗 Hugging Face; the gogpt-7b and gogpt2-7b weights are currently open.
| Model Name | Base Model | Size | Download |
|-------------------------------------------------------------|-----------|------|-------------------------------------------------|
| [golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) | Llama-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt-7b) |
| [golaxy/gogpt2-7b](https://huggingface.co/golaxy/gogpt2-7b) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b) |
| [golaxy/gogpt2-7b-pretrain](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b-pretrain) |
| [golaxy/gogpt2-13b-pretrain](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b-pretrain) |
| [golaxy/gogpt2-13b](https://huggingface.co/golaxy/gogpt2-13b) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b) |
[GoGPT-Github](https://github.com/yanqiangmiffy/GoGPT)
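A minimal generation sketch with Transformers, assuming the checkpoint loads as a standard LLaMA-architecture causal LM (the prompt handling is an assumption):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("golaxy/gogpt2-13b")
model = AutoModelForCausalLM.from_pretrained("golaxy/gogpt2-13b", torch_dtype=torch.float16, device_map="auto")

# "怎么学习机器学习" is the first test prompt below ("How should I learn machine learning?")
inputs = tokenizer("怎么学习机器学习", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```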
## Test Examples
- How should I learn machine learning?
```
好的,以下是一些学习机器学习的方法:
1. 上下线:通过实践项目来学习,例如使用开源机器学习框架(如TensorFlow、Keras、Scikit-Learn等)来实现各种机器学习任务。
2. 课程学习:选择一些在线或线下的机器学习课程,例如Coursera、Udemy、edX等,这些平台都提供了各种机器学习课程。
3. 阅读书籍:选择一些经典的机器学习书籍,例如Andrew Ng的《深度学习》、《机器学习》等,这些书籍可以帮助你深入地学习机器学习基础知识。
4. 加入社区:加入机器学习社区(如Kaggle、Github、Reddit等),与其他机器学习爱好者交流和分享经验,从而不断学习和改进。
5. 参加比赛:参加一些机器学习比赛(如Kaggle等),可以通过解决实际问题来学习并提高自己的技能。
总之,学习机器学习需要不断实践和探索,找到适合自己的学习方法。
```
- Write a review of a symphony concert, discussing the orchestra's performance and the overall audience experience.
```
交响乐音乐会是音乐爱好者和古典音乐的爱好者的梦想。上周末,我参加了一场交响乐音乐会,我的经历真的很难忘。
首先,乐团的表演非常出色。他们都是专业音乐家,他们的演奏技巧非常娴熟。他们在演奏时,展现出了音乐的真谛,让人感受到音乐的力量和魅力。同时,他们还非常出色地演绎了交响乐中的不同元素,如高音、低音、节奏、调性和变化等。他们的表演非常自然,没有过分的紧张或放松,给人一种平和的、和谐的感觉。
其次,观众的整体体验也非常不错。音乐会现场非常宽敞明亮,座椅间隔非常宽敞。观众可以自由地欣赏音乐的每一个细节,同时还可以享受到音乐的魅力。同时,音乐会现场的氛围也非常友好,观众之间相互交流,享受音乐的共品。此外,音乐会现场还设有各种美食和饮料供观众品尝,增加了观众的舒适度。
总的来说,这次交响乐音乐会是一次非常愉快的音乐体验。乐团的表演非常出色,观众的整体体验也非常不错。我真的非常期待下一场音乐会,再次领略到音乐的魅力。
```
- Write an engaging review of a recently released science-fiction film, discussing its plot, characters, and special effects.
```
《星球大战:新纪元》是一部令人兴奋的科幻电影,它延续了之前的星球大战系列的故事情节和角色,同时也带来了新的挑战和惊喜。
这部电影的情节非常引人入胜,它讲述了一群反抗军与帝国之间的斗争,反抗军试图推翻帝国的统治,而帝国则竭力维持其统治。故事情节的复杂程度和紧张程度让人惊叹,每个角色都有自己独特的动机和经历,这使得整个电影的故事更加生动。
这部电影的主要角色包括雷伊(Rey),一个年轻的女武士,以及她的导师凯诺(Kylo Ren),一名邪恶的武士。这两个角色的互动和冲突是电影中的一个重要焦点,他们的故事情节充满了神秘和悬念,使得观众期待着下一刻会发生什么。此外,电影中还有一些新角色的出现,如米斯·恩德(Maisy Endor),一个新的星球上反抗军的领导者,以及C-3PO和R2-D2等经典的机器人角色,这些角色的回归和表现也都令人兴奋。
这部电影的视觉效果也非常惊人,它使用了最新的技术和特效来营造出逼真的宇宙场景和战斗场面。电影中的星球、建筑物、机械设备和武器的视觉效果都非常逼真,而且动作场面也非常刺激。电影中的视觉效果让人感到惊奇,也使观众更加沉浸在电影中的世界。
总的来说,《星球大战:新纪元》是一部非常值得一看的科幻电影,它讲述了一个引人入胜的故事,有着精彩的角色和特效,使得整个电影令人兴奋和震撼。这部电影的成功,也预示着星球大战系列的未来。
```
- On average, how many times does a human blink over a lifetime? Try to explain your answer, walking the reader through your reasoning step by step.
```
人类平均一生眨眼几次是5600000次左右。这个数字是由许多不同的研究结果和估算得出的。
首先,眨眼频率随年龄的增加而变化。婴儿眨眼频率比成人高,随着年龄的增长,眨眼频率逐渐下降。根据这个规律,我们可以计算出成人眨眼频率约为10次/分钟。
其次,眨眼频率还受到多种因素的影响。例如,当人们感到紧张或疲劳时,眨眼频率会增加。同样地,当人们感到愉悦或放松时,眨眼频率会降低。此外,不同的身体部位的眨眼频率也不同。例如,眼睛内侧的眨眼频率比内侧的眨眼频率高。
最后,由于人们的眼睛健康状况也会影响眨眼频率,例如眼底病变或视力问题。这些因素都可以导致眨眼频率的改变。
综上所述,人类平均一生眨眼几次约为5600000次左右。这个数字是由多个因素影响的结果,包括年龄、身体状况、情感状态等。
```
- Implement the quicksort algorithm in the Python programming language
Here is example code implementing quicksort in Python:
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
else:
pivot = arr[0]
left = []
right = []
for i in range(1, len(arr)):
if arr[i] < pivot:
left.append(arr[i])
else:
right.append(arr[i])
return quick_sort(left) + [pivot] + quick_sort(right)
```
This function takes an integer array as input; its time complexity is O(n^2) in the worst case and O(n log n) in the best case.
- How can I improve my time-management skills?
```
以下是一些提高时间管理技能的建议:
1. 制定计划:每天早上或每天开始时,制定一个详细的计划,包括每天需要完成的任务和目标。
2. 优先级排序:将任务按照优先级排序,先完成重要任务。
3. 集中注意力:集中注意力,将注意力集中在当前的任务上,避免分心。
4. 避免拖延:尽可能避免拖延,尽快完成任务,避免任务过期。
5. 避免多任务:多任务会增加任务的时间,所以最好只专注一个任务,直到完成它。
6. 休息:休息是恢复和提高生产力的关键,因此定期休息,放松身心。
7. 调整时间:如果发现自己的时间管理有问题,可以调整时间,例如延长休息时间或缩短任务时间。
8. 学习新技能:学习新的技能可以帮助提高工作效率,例如使用工具或软件来管理任务。
9. 寻求帮助:如果你发现自己难以管理时间,可以寻求家人、朋友或同事的帮助。
希望这些建议可以帮助您提高时间管理技能。
```
- What are the main differences between the Python and JavaScript programming languages?
```
Python和JavaScript是两个不同的编程语言,它们有以下主要区别:
1. 数据类型:Python是一种动态类型的编程语言,它无法指定数据类型,而是根据变量中使用的值自动确定数据类型。JavaScript是一种动态类型编程语言,它使用字面量来指定变量的数据类型。
2. 语法:Python和JavaScript的语法有一些相似之处,但也有一些不同之处。例如,Python使用缩进来定义代码块,而JavaScript使用花括号。
3. 执行环境:Python通常运行在本地计算机的Python解释器中运行,而JavaScript通常运行在浏览器中运行。这意味着Python通常用于开发桌面应用程序和Web服务器端应用程序,而JavaScript主要用于开发Web应用程序。
4. 编程语言特性:Python具有更高的可读性和可维护性,而JavaScript具有更好的可扩展性和可维护性。
总之,Python和JavaScript是两种不同的编程语言,具有不同的特点和应用场景。选择哪种编程语言取决于项目的具体需求和开发者的技能水平。
```
- What are the main differences between plant protein and animal protein?
```
植物蛋白和动物蛋白的主要区别在于来源和氨基酸成分。
来源:
植物蛋白主要来自植物组织,如豆类、种子、果实、叶子等。而动物蛋白主要来自动物的身体部位,如肌肉、内脏、鱼类、蛋类等。
氨基酸成分:
植物蛋白中的主要氨基酸是植物性氨基酸,如氨基酸,植物性必需氨基酸(如硫胺素和植酸)等。而动物蛋白中的主要氨基酸是动物性氨基酸,如肌肉蛋白、乳糖蛋白等。
因此,植物蛋白和动物蛋白的营养价值是不同的。植物蛋白富含植物性必需氨基酸,如硫胺素、植酸等,而动物蛋白则含有更多的动物性必需氨基酸,如肌肉蛋白、乳糖蛋白等。植物蛋白通常比动物蛋白低热量,但也有些植物蛋白的蛋白质质量较低。动物蛋白通常比植物蛋白高蛋白质质量,但同样含有较多的饱和脂肪和胆固醇等有害物质。
```
- What are the main factors that influence consumer behavior?
```
影响消费者行为的因素有很多,以下是其中几个主要的因素:
1. 消费需求:消费者的需求是影响消费者行为的主要因素之一。消费者的需求是根据个人利益和价值观而产生的。例如,如果一个消费者喜欢时尚的服装,那么他们可能会对购买这些服装的商场和品牌的选择更有吸引力。
2. 购买力:消费者购买力的限制是一个影响消费者行为的因素。消费者可能会因为预算限制而选择购买某些商品。
3. 价格:价格也是影响消费者行为的因素之一。消费者通常会比较不同商品的价格,并根据价格选择购买哪种商品。
4. 品牌知名度:品牌知名度是影响消费者行为的因素之一。消费者通常会选择有信誉的品牌,因为他们相信这些品牌能够提供高质量的产品和服务。
5. 社交媒体:社交媒体在影响消费者行为中扮演越来越重要的作用。许多消费者会在社交媒体上了解商品和品牌的信息,并选择购买这些商品。
6. 社交媒体:社交媒体在影响消费者行为中扮演越来越重要的角色。许多消费者会在社交媒体上了解商品和品牌的信息,并选择购买这些商品。
总之,影响消费者行为的主要因素有很多,这些因素往往是相互关联的。企业可以通过了解这些因素,来制定有效的营销策略,吸引和留住消费者。
```
|
csukuangfj/sherpa-onnx-tdnn-yesno | csukuangfj | 2023-08-12T09:02:11Z | 0 | 0 | null | [
"onnx",
"license:apache-2.0",
"region:us"
]
| null | 2023-08-12T07:20:49Z | ---
license: apache-2.0
---
# Introduction
This repo contains onnx models for the [tdnn][tdnn] recipe of the [yesno][yesno] dataset
from [icefall][icefall].
Please see [sherpa-onnx][sherpa-onnx] for usages.
[yesno]: https://www.openslr.org/1/
[icefall]: https://github.com/k2-fsa/icefall
[sherpa-onnx]: https://github.com/k2-fsa/sherpa-onnx
[tdnn]: https://github.com/k2-fsa/icefall/tree/master/egs/yesno/ASR/tdnn
|
Michelvh/flan-t5-mc-question-generation | Michelvh | 2023-08-12T08:58:28Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-08-11T12:55:30Z | ---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
model-index:
- name: flan-t5-mc-question-generation
results: []
inference:
parameters:
max_length: 512
num_beams: 4
length_penalty: 1.5
no_repeat_ngram_size: 3
early_stopping: True
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-mc-question-generation
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2509
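A usage sketch that mirrors the inference parameters declared in the metadata above; the expected input format is not documented here, so the passage below is only an assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Michelvh/flan-t5-mc-question-generation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = "The mitochondrion is the organelle that produces most of the cell's energy."  # made-up passage
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=512,
    num_beams=4,
    length_penalty=1.5,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```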
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9047 | 0.25 | 100 | 1.4246 |
| 1.5894 | 0.51 | 200 | 1.3711 |
| 1.5355 | 0.76 | 300 | 1.3450 |
| 1.5041 | 1.02 | 400 | 1.3255 |
| 1.4858 | 1.27 | 500 | 1.3134 |
| 1.4711 | 1.53 | 600 | 1.3038 |
| 1.4576 | 1.78 | 700 | 1.2951 |
| 1.4466 | 2.04 | 800 | 1.2888 |
| 1.4405 | 2.29 | 900 | 1.2836 |
| 1.4284 | 2.55 | 1000 | 1.2794 |
| 1.4228 | 2.8 | 1100 | 1.2758 |
| 1.4234 | 3.06 | 1200 | 1.2719 |
| 1.4104 | 3.31 | 1300 | 1.2690 |
| 1.4147 | 3.56 | 1400 | 1.2666 |
| 1.41 | 3.82 | 1500 | 1.2637 |
| 1.3996 | 4.07 | 1600 | 1.2622 |
| 1.4015 | 4.33 | 1700 | 1.2600 |
| 1.3958 | 4.58 | 1800 | 1.2583 |
| 1.395 | 4.84 | 1900 | 1.2566 |
| 1.3899 | 5.09 | 2000 | 1.2553 |
| 1.3929 | 5.35 | 2100 | 1.2542 |
| 1.3884 | 5.6 | 2200 | 1.2529 |
| 1.3884 | 5.86 | 2300 | 1.2523 |
| 1.3821 | 6.11 | 2400 | 1.2520 |
| 1.3886 | 6.37 | 2500 | 1.2513 |
| 1.3865 | 6.62 | 2600 | 1.2510 |
| 1.3841 | 6.87 | 2700 | 1.2509 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Long1405/whisper-small-youtube-podcast-data | Long1405 | 2023-08-12T08:48:16Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-08-12T05:15:30Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-youtube-podcast-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-youtube-podcast-data
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6011
- Wer: 60.8819
- Cer: 48.7902
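A transcription sketch using the automatic-speech-recognition pipeline (the audio file name is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Long1405/whisper-small-youtube-podcast-data",
    chunk_length_s=30,  # chunk long podcast audio into 30-second windows
)
print(asr("podcast_clip.wav")["text"])
```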
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.0118 | 8.7 | 1000 | 0.5323 | 58.0111 | 45.8276 |
| 0.0014 | 17.39 | 2000 | 0.6011 | 60.8819 | 48.7902 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jordyvl/vit-base_rvl-cdip-small_rvl_cdip-NK1000_og_simkd_rand | jordyvl | 2023-08-12T07:33:11Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-08-11T21:27:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl-cdip-small_rvl_cdip-NK1000_og_simkd_rand
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl-cdip-small_rvl_cdip-NK1000_og_simkd_rand
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 267.1454
- Accuracy: 0.6807
- Brier Loss: 0.6059
- Nll: 2.5092
- F1 Micro: 0.6807
- F1 Macro: 0.6792
- Ece: 0.2988
- Aurc: 0.1779
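A quick way to try the distilled classifier is the image-classification pipeline (the document image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="jordyvl/vit-base_rvl-cdip-small_rvl_cdip-NK1000_og_simkd_rand",
)
print(classifier("scanned_document.png", top_k=5))
```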
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 286.6037 | 1.0 | 1000 | 286.3978 | 0.242 | 1.0585 | 5.0180 | 0.242 | 0.1919 | 0.3885 | 0.6070 |
| 284.5917 | 2.0 | 2000 | 285.0526 | 0.235 | 1.4192 | 6.2048 | 0.235 | 0.1678 | 0.6914 | 0.6366 |
| 284.1567 | 3.0 | 3000 | 283.4989 | 0.3705 | 1.0456 | 4.7503 | 0.3705 | 0.2880 | 0.4669 | 0.4145 |
| 282.6679 | 4.0 | 4000 | 282.5618 | 0.4042 | 0.8940 | 4.1927 | 0.4042 | 0.3644 | 0.3629 | 0.3572 |
| 282.2283 | 5.0 | 5000 | 281.9135 | 0.418 | 0.9976 | 3.8856 | 0.418 | 0.3686 | 0.4631 | 0.3778 |
| 281.3193 | 6.0 | 6000 | 279.9180 | 0.4723 | 0.8755 | 3.4852 | 0.4723 | 0.4312 | 0.3960 | 0.2962 |
| 280.7993 | 7.0 | 7000 | 279.2325 | 0.5038 | 0.8411 | 3.3760 | 0.5038 | 0.4635 | 0.3844 | 0.2753 |
| 279.8249 | 8.0 | 8000 | 278.4682 | 0.5268 | 0.8078 | 3.1572 | 0.5268 | 0.4894 | 0.3705 | 0.2620 |
| 278.8243 | 9.0 | 9000 | 278.2146 | 0.5268 | 0.8245 | 3.2631 | 0.5268 | 0.5043 | 0.3819 | 0.2729 |
| 278.1676 | 10.0 | 10000 | 276.9399 | 0.5607 | 0.7853 | 3.0151 | 0.5607 | 0.5390 | 0.3741 | 0.2275 |
| 276.8185 | 11.0 | 11000 | 276.3879 | 0.5697 | 0.7659 | 2.9137 | 0.5697 | 0.5520 | 0.3660 | 0.2221 |
| 276.0937 | 12.0 | 12000 | 275.9589 | 0.5777 | 0.7626 | 2.9855 | 0.5777 | 0.5643 | 0.3606 | 0.2360 |
| 276.0743 | 13.0 | 13000 | 275.6118 | 0.5675 | 0.7938 | 3.2975 | 0.5675 | 0.5545 | 0.3852 | 0.2320 |
| 275.008 | 14.0 | 14000 | 275.0585 | 0.6 | 0.7359 | 2.8607 | 0.6 | 0.5861 | 0.3517 | 0.2142 |
| 274.483 | 15.0 | 15000 | 274.0515 | 0.6292 | 0.6738 | 2.7667 | 0.6292 | 0.6262 | 0.3215 | 0.1904 |
| 273.261 | 16.0 | 16000 | 273.7844 | 0.6312 | 0.6819 | 2.7219 | 0.6312 | 0.6296 | 0.3286 | 0.2048 |
| 272.9319 | 17.0 | 17000 | 273.4691 | 0.6198 | 0.7009 | 2.8745 | 0.6198 | 0.6160 | 0.3410 | 0.2134 |
| 272.456 | 18.0 | 18000 | 273.1716 | 0.6195 | 0.7071 | 2.8631 | 0.6195 | 0.6223 | 0.3440 | 0.2140 |
| 272.0481 | 19.0 | 19000 | 272.5084 | 0.6322 | 0.6864 | 2.7598 | 0.6322 | 0.6292 | 0.3362 | 0.2119 |
| 271.0429 | 20.0 | 20000 | 272.1741 | 0.6365 | 0.6830 | 2.8104 | 0.6365 | 0.6300 | 0.3345 | 0.2185 |
| 271.0098 | 21.0 | 21000 | 271.8972 | 0.649 | 0.6569 | 2.8558 | 0.649 | 0.6477 | 0.3221 | 0.2076 |
| 270.1226 | 22.0 | 22000 | 271.3564 | 0.639 | 0.6850 | 3.0353 | 0.639 | 0.6326 | 0.3372 | 0.2275 |
| 269.8644 | 23.0 | 23000 | 271.2604 | 0.6332 | 0.6903 | 2.9472 | 0.6332 | 0.6330 | 0.3400 | 0.2367 |
| 269.6737 | 24.0 | 24000 | 270.9163 | 0.6485 | 0.6622 | 2.8937 | 0.6485 | 0.6477 | 0.3258 | 0.2139 |
| 268.3083 | 25.0 | 25000 | 270.3471 | 0.6528 | 0.6590 | 2.7873 | 0.6528 | 0.6550 | 0.3231 | 0.2228 |
| 268.6058 | 26.0 | 26000 | 270.2531 | 0.659 | 0.6377 | 2.7500 | 0.659 | 0.6599 | 0.3125 | 0.1980 |
| 268.5694 | 27.0 | 27000 | 270.0281 | 0.6535 | 0.6510 | 2.7183 | 0.6535 | 0.6502 | 0.3210 | 0.2112 |
| 267.5742 | 28.0 | 28000 | 269.6303 | 0.664 | 0.6327 | 2.6630 | 0.664 | 0.6619 | 0.3109 | 0.1974 |
| 267.4235 | 29.0 | 29000 | 269.3493 | 0.6607 | 0.6417 | 2.7860 | 0.6607 | 0.6568 | 0.3162 | 0.2074 |
| 267.1017 | 30.0 | 30000 | 269.1249 | 0.675 | 0.6152 | 2.6205 | 0.675 | 0.6760 | 0.3013 | 0.1923 |
| 266.7395 | 31.0 | 31000 | 268.8958 | 0.6685 | 0.6281 | 2.7126 | 0.6685 | 0.6638 | 0.3086 | 0.1943 |
| 266.3374 | 32.0 | 32000 | 268.6245 | 0.6703 | 0.6224 | 2.7028 | 0.6703 | 0.6686 | 0.3065 | 0.1900 |
| 266.3529 | 33.0 | 33000 | 268.4537 | 0.6697 | 0.6240 | 2.6593 | 0.6697 | 0.6683 | 0.3066 | 0.1964 |
| 266.1322 | 34.0 | 34000 | 268.1314 | 0.678 | 0.6096 | 2.6485 | 0.678 | 0.6784 | 0.3008 | 0.1857 |
| 265.3824 | 35.0 | 35000 | 268.1505 | 0.6707 | 0.6242 | 2.5832 | 0.6707 | 0.6696 | 0.3058 | 0.1916 |
| 265.5754 | 36.0 | 36000 | 267.9319 | 0.676 | 0.6155 | 2.6208 | 0.676 | 0.6761 | 0.3014 | 0.1908 |
| 265.6115 | 37.0 | 37000 | 268.0886 | 0.679 | 0.6093 | 2.6068 | 0.679 | 0.6795 | 0.2991 | 0.1796 |
| 264.8437 | 38.0 | 38000 | 267.9896 | 0.6783 | 0.6113 | 2.5873 | 0.6783 | 0.6765 | 0.3000 | 0.1805 |
| 264.8028 | 39.0 | 39000 | 267.5381 | 0.68 | 0.6048 | 2.5007 | 0.68 | 0.6771 | 0.2974 | 0.1771 |
| 264.8063 | 40.0 | 40000 | 267.6070 | 0.6763 | 0.6127 | 2.5359 | 0.6763 | 0.6751 | 0.3030 | 0.1821 |
| 264.7481 | 41.0 | 41000 | 267.4914 | 0.6837 | 0.6000 | 2.5214 | 0.6837 | 0.6809 | 0.2942 | 0.1830 |
| 264.6455 | 42.0 | 42000 | 267.6581 | 0.6857 | 0.5968 | 2.5211 | 0.6857 | 0.6856 | 0.2919 | 0.1741 |
| 264.0388 | 43.0 | 43000 | 267.3815 | 0.6797 | 0.6035 | 2.5123 | 0.6797 | 0.6795 | 0.2973 | 0.1773 |
| 264.3585 | 44.0 | 44000 | 267.3548 | 0.6847 | 0.5997 | 2.5583 | 0.6847 | 0.6851 | 0.2943 | 0.1769 |
| 263.7822 | 45.0 | 45000 | 267.0005 | 0.682 | 0.6043 | 2.5023 | 0.682 | 0.6793 | 0.2966 | 0.1788 |
| 263.9765 | 46.0 | 46000 | 267.2113 | 0.6853 | 0.5955 | 2.5256 | 0.6853 | 0.6816 | 0.2922 | 0.1737 |
| 264.1576 | 47.0 | 47000 | 267.1731 | 0.6833 | 0.6002 | 2.5071 | 0.6833 | 0.6825 | 0.2951 | 0.1768 |
| 263.8688 | 48.0 | 48000 | 267.0122 | 0.6843 | 0.5980 | 2.5328 | 0.6843 | 0.6830 | 0.2942 | 0.1781 |
| 263.8963 | 49.0 | 49000 | 266.8628 | 0.6843 | 0.6021 | 2.5231 | 0.6843 | 0.6831 | 0.2957 | 0.1782 |
| 264.2061 | 50.0 | 50000 | 267.1454 | 0.6807 | 0.6059 | 2.5092 | 0.6807 | 0.6792 | 0.2988 | 0.1779 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Mitsua/vroid-diffusion-test | Mitsua | 2023-08-12T07:27:02Z | 52 | 0 | diffusers | [
"diffusers",
"text-to-image",
"dataset:Mitsua/vroid-image-dataset-lite",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-09T07:05:50Z | ---
license: other
datasets:
- Mitsua/vroid-image-dataset-lite
pipeline_tag: text-to-image
---
# Model Card for VRoid Diffusion
<!-- Provide a quick summary of what the model is/does. -->
This is a latent text-to-image diffusion model to demonstrate how U-Net training affects the generated images.
- Text Encoder is from [OpenCLIP ViT-H/14](https://github.com/mlfoundations/open_clip), MIT License, Training Data : LAION-2B
- VAE is from [Mitsua Diffusion One](https://huggingface.co/Mitsua/mitsua-diffusion-one), Mitsua Open RAIL-M License, Training Data: Public Domain/CC0 + Licensed
- U-Net is trained from scratch using full version of [VRoid Image Dataset Lite](https://huggingface.co/datasets/Mitsua/vroid-image-dataset-lite) with some modifications.
- VRoid is a trademark or registered trademark of Pixiv inc. in Japan and other regions.
## Model Details
- `vroid_diffusion_test.safetensors`
- base variant.
- `vroid_diffusion_test_invert_red_blue.safetensors`
- `red` and `blue` in the caption is swapped.
- `pink` and `skyblue` in the caption is swapped.
- `vroid_diffusion_test_monochrome.safetensors`
- all training images are converted to grayscale.
## Model Variant
- [VRoid Diffusion Unconditional](https://huggingface.co/Mitsua/vroid-diffusion-test-unconditional)
- This is unconditional image generator without CLIP.
### Model Description
- **Developed by:** Abstract Engine.
- **License:** Mitsua Open RAIL-M License.
## Uses
### Direct Use
Text-to-Image generation for research and educational purposes.
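A generation sketch with Diffusers, assuming the repository layout is compatible with `StableDiffusionPipeline.from_pretrained` (the prompt is a placeholder; the 256x256 size matches the trained resolution noted below):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Mitsua/vroid-diffusion-test", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a vroid-style 3d anime character portrait", height=256, width=256).images[0]
image.save("sample.png")
```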
### Out-of-Scope Use
Any deployed use case of the model.
## Training Details
- Trained resolution : 256x256
- Batch Size : 48
- Steps : 45k
- LR : 1e-5 with warmup 1000 steps
### Training Data
We use full version of [VRoid Image Dataset Lite](https://huggingface.co/datasets/Mitsua/vroid-image-dataset-lite) with some modifications.
|