modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-02 12:28:20) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 462 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-02 12:26:48) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
vineetsharma/ppo-LunarLander-v2 | vineetsharma | 2023-07-11T11:35:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T11:34:45Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.79 +/- 14.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; check the repository files for the actual .zip name
checkpoint = load_from_hub("vineetsharma/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Penbloom/Penbloom_semirealmix | Penbloom | 2023-07-11T11:34:24Z | 0 | 0 | null | [
"musclar",
"korean",
"license:openrail",
"region:us"
] | null | 2023-04-08T14:59:05Z | ---
license: openrail
tags:
- musclar
- korean
---
## Model Detail & Merge Recipes
Penbloom_semirealmix aims to create muscular girls with nice skin texture and detailed clothes. This is a ``merge`` model.
## Source models
- [Civitai:Beenyou|Stable Diffusion Checkpoint](https://civitai.com/models/27688/beenyou)
- [⚠NSFW] [Civitai:饭特稀|Stable Diffusion Checkpoint](https://civitai.com/models/18427/v08)
### Penbloom_semirealmix_v1.0 |
vvasanth/falcon7b-finetune-test-220623_1 | vvasanth | 2023-07-11T11:31:41Z | 0 | 0 | null | [
"text-generation",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-07-04T11:51:13Z | ---
license: apache-2.0
pipeline_tag: text-generation
--- |
gsaivinay/wizard-vicuna-13B-SuperHOT-8K-fp16 | gsaivinay | 2023-07-11T11:24:54Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-11T11:22:25Z | ---
inference: false
license: other
duplicated_from: TheBloke/wizard-vicuna-13B-SuperHOT-8K-fp16
---
<!-- header start -->
<div style="width: 100%;">
Cloned from TheBloke repo
</div>
<!-- header end -->
# June Lee's Wizard Vicuna 13B fp16
This is fp16 pytorch format model files for [June Lee's Wizard Vicuna 13B](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged onto the base model, and 8K context can then be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/junelee/wizard-vicuna-13b)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "gsaivinay/wizard-vicuna-13B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and out of interest.
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.
You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)
#### Training Details
I trained the LoRA with the following configuration (a hedged `peft`-style sketch of it follows the list):
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
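A hedged `peft`-style sketch of this configuration (parameter names follow the `peft` library; this is not the author's actual training script):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=4,                    # Rank = 4
    lora_alpha=8,           # Alpha = 8
    lora_dropout=0.0,       # no dropout
    bias="none",            # no bias
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```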
# Original model card: June Lee's Wizard Vicuna 13B
# Wizard-Vicuna-13B-HF
This is a float16 HF format repo for [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b).
June Lee's repo was also in HF format. The reason I've made this is that the original repo was in float32, meaning it required 52GB of disk space, VRAM, and RAM.
This model was converted to float16 to make it easier to load and manage.
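A minimal half-precision loading sketch (not part of the original card; it assumes the float16 repo listed below and that `accelerate` is installed for `device_map`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Keep the checkpoint in float16 instead of upcasting to the float32 default
tokenizer = AutoTokenizer.from_pretrained("TheBloke/wizard-vicuna-13B-HF")
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/wizard-vicuna-13B-HF",
    torch_dtype=torch.float16,
    device_map="auto",
)
```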
## Repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML).
* [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF).
# Original WizardVicuna-13B model card
Github page: https://github.com/melodysdreamj/WizardVicunaLM
# WizardVicunaLM
### Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method
I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage.
## Benchmark
### Approximately 7% performance improvement over VicunaLM

### Detail
The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order.
| | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link |
|-----|--------|-------------------|------------|-----------|----------|
| Q1 | 95 | 90 | 85 | 88 | [link](https://sharegpt.com/c/YdhIlby) |
| Q2 | 95 | 97 | 90 | 89 | [link](https://sharegpt.com/c/YOqOV4g) |
| Q3 | 85 | 90 | 80 | 65 | [link](https://sharegpt.com/c/uDmrcL9) |
| Q4 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/XBbK5MZ) |
| Q5 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/AQ5tgQX) |
| Q6 | 92 | 85 | 87 | 88 | [link](https://sharegpt.com/c/eVYwfIr) |
| Q7 | 95 | 90 | 85 | 92 | [link](https://sharegpt.com/c/Kqyeub4) |
| Q8 | 90 | 85 | 75 | 70 | [link](https://sharegpt.com/c/M0gIjMF) |
| Q9 | 92 | 85 | 70 | 60 | [link](https://sharegpt.com/c/fOvMtQt) |
| Q10 | 90 | 80 | 75 | 85 | [link](https://sharegpt.com/c/YYiCaUz) |
| Q11 | 90 | 85 | 75 | 65 | [link](https://sharegpt.com/c/HMkKKGU) |
| Q12 | 85 | 90 | 80 | 88 | [link](https://sharegpt.com/c/XbW6jgB) |
| Q13 | 90 | 95 | 88 | 85 | [link](https://sharegpt.com/c/JXZb7y6) |
| Q14 | 94 | 89 | 90 | 91 | [link](https://sharegpt.com/c/cTXH4IS) |
| Q15 | 90 | 85 | 88 | 87 | [link](https://sharegpt.com/c/GZiM0Yt) |
| Average | 91 | 88 | 82 | 80 | |
## Principle
We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques.
Turning a single command into a rich conversation is what we've done [here](https://sharegpt.com/c/6cmxqq0).
After creating the training data, I later trained it according to the Vicuna v1.1 [training method](https://github.com/lm-sys/FastChat/blob/main/scripts/train_vicuna_13b.sh).
## Detailed Method
First, we explore and expand various areas in the same topic using the 7K conversations created by WizardLM. However, we made it in a continuous conversation format instead of the instruction format. That is, it starts with WizardLM's instruction, and then expands into various areas in one conversation using ChatGPT 3.5.
After that, we applied the following model using Vicuna's fine-tuning format.
## Training Process
Trained with 8 A100 GPUs for 35 hours.
## Weights
You can see the [dataset](https://huggingface.co/datasets/junelee/wizard_vicuna_70k) we used for training and the [13b model](https://huggingface.co/junelee/wizard-vicuna-13b) on the Hugging Face Hub.
## Conclusion
If we extend the conversations to GPT-4 32K, we can expect a dramatic improvement, as we could generate 8x more content and obtain more accurate and richer conversations.
## License
The model is licensed under the LLaMA model license, and the dataset is licensed under OpenAI's terms because it was generated with ChatGPT. Everything else is free.
## Author
[JUNE LEE](https://github.com/melodysdreamj) - He is active in Songdo Artificial Intelligence Study and GDG Songdo.
|
openwaifu/SoVits-VC-Chtholly-Nota-Seniorious-0.1 | openwaifu | 2023-07-11T11:19:42Z | 1 | 0 | transformers | [
"transformers",
"anime",
"audio",
"tts",
"voice conversion",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-04-17T12:07:10Z | ---
license: mit
tags:
- anime
- audio
- tts
- voice conversion
---
Original audio (generated from TTS):
<audio controls src="https://s3.amazonaws.com/moonup/production/uploads/62d3a59dc72c791b23918293/neVwV9PEc0gGylrEup2Kn.wav"></audio>
Converted audio (using SoVits Chtholly-VC):
<audio controls src="https://s3.amazonaws.com/moonup/production/uploads/62d3a59dc72c791b23918293/oKNg3kVgAb7utyGCZa8f9.wav"></audio> |
1aurent/CartPole-v1 | 1aurent | 2023-07-11T11:15:03Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T10:42:02Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 498.08 +/- 19.05
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jwu323/origin-llama-7b | jwu323 | 2023-07-11T11:06:24Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-11T09:17:15Z | This contains the original weights for the LLaMA-7b model.
This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
[According to this comment](https://github.com/huggingface/transformers/issues/21681#issuecomment-1436552397), the dtype of a model in PyTorch is always float32, regardless of the dtype of the checkpoint you saved. If you load a float16 checkpoint into a model you create (which is float32 by default), the dtype that is kept at the end is the dtype of the model, not the dtype of the checkpoint.
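A small illustration of that behaviour (a sketch, not part of the original card; gated access to the weights is still required):
```python
import torch
from transformers import LlamaForCausalLM

# Without torch_dtype, from_pretrained creates the model in float32 even for a half-precision checkpoint
model = LlamaForCausalLM.from_pretrained("jwu323/origin-llama-7b")
print(model.dtype)  # torch.float32

# Pass torch_dtype explicitly to keep half precision
model_fp16 = LlamaForCausalLM.from_pretrained("jwu323/origin-llama-7b", torch_dtype=torch.float16)
print(model_fp16.dtype)  # torch.float16
```
|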
digiplay/BasilKorea_v2 | digiplay | 2023-07-11T11:00:49Z | 315 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-11T10:27:11Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
|
ashnrk/textual_inversion_pasture | ashnrk | 2023-07-11T10:54:44Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-11T09:52:17Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/textual_inversion_pasture
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find example images in the repository.
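A hedged usage sketch with `diffusers` (not from the auto-generated card; the placeholder token below is a hypothetical name, check the repo's embedding file for the actual token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo; "<pasture>" is a hypothetical placeholder token
pipe.load_textual_inversion("ashnrk/textual_inversion_pasture", token="<pasture>")
image = pipe("an aerial photo of <pasture> farmland").images[0]
image.save("pasture.png")
```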
|
nickw9/ppo-LunarLander-v2 | nickw9 | 2023-07-11T10:48:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T10:48:37Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.15 +/- 10.89
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; check the repository files for the actual .zip name
checkpoint = load_from_hub("nickw9/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
digiplay/RealEpicMajicRevolution_v1 | digiplay | 2023-07-11T10:42:18Z | 393 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-11T09:48:27Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/107185/real-epic-majic-revolution
Original Author's DEMO images :


|
Winmodel/ML-Agents-Pyramids | Winmodel | 2023-07-11T10:36:07Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-11T10:36:05Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Winmodel/ML-Agents-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
mort1k/q-FrozenLake-v1-4x4-noSlippery | mort1k | 2023-07-11T10:35:42Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T10:35:41Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mort1k/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
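The snippet above relies on a `load_from_hub` helper and a `gym` import from the Deep RL Course notebooks. A self-contained sketch (the pickle layout with an `env_id` key is an assumption taken from the snippet above):
```python
import pickle
import gymnasium as gym  # the course notebooks may use `import gym` instead
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # Minimal stand-in for the course helper: download and unpickle the saved Q-table dict
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="mort1k/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)
```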
|
F-Haru/paraphrase-mpnet-base-v2_09-04-MarginMSELoss-finetuning-7-5 | F-Haru | 2023-07-11T10:29:25Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-11T09:35:14Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
This model was fine-tuned using only negative ja-en / en-ja pairs whose cosine similarity was at least 0.9 or at most 0.4, and was then knowledge-distilled with paraphrase-mpnet-base-v2 as the teacher model.
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1686 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 1000,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
miki-kawa/huggingdatavit-base-beans | miki-kawa | 2023-07-11T10:22:59Z | 193 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-11T09:55:51Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: huggingdatavit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9924812030075187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huggingdatavit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0356
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
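A hedged `TrainingArguments` sketch of these settings (not the author's actual script; `output_dir` is an assumption):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="huggingdatavit-base-beans",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```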
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1059 | 1.54 | 100 | 0.0356 | 0.9925 |
| 0.0256 | 3.08 | 200 | 0.0663 | 0.9774 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.11.0
|
Krish23/Tujgc | Krish23 | 2023-07-11T10:22:51Z | 0 | 0 | null | [
"license:cc-by-nc-sa-2.0",
"region:us"
] | null | 2023-07-11T10:22:51Z | ---
license: cc-by-nc-sa-2.0
---
|
bofenghuang/vigogne-7b-instruct | bofenghuang | 2023-07-11T10:18:13Z | 1,493 | 23 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"LLM",
"fr",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-03-22T21:36:45Z | ---
license: openrail
language:
- fr
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- LLM
inference: false
---
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-7b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-7B-Instruct: A French Instruction-following LLaMA Model
Vigogne-7B-Instruct is a LLaMA-7B model fine-tuned to follow instructions in French.
For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
## Changelog
All versions are available in branches.
- **V1.0**: Initial release, trained on the translated Stanford Alpaca dataset.
- **V1.1**: Improved translation quality of the Stanford Alpaca dataset.
- **V2.0**: Expanded training dataset to 224k for better performance.
- **V3.0**: Further expanded training dataset to 262k for improved results.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from vigogne.preprocess import generate_instruct_prompt
model_name_or_path = "bofenghuang/vigogne-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
user_query = "Expliquez la différence entre DoS et phishing."
prompt = generate_instruct_prompt(user_query)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=0.1,
do_sample=True,
repetition_penalty=1.0,
max_new_tokens=512,
),
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
You can also infer this model by using the following Google Colab Notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
|
ivivnov/ppo-LunarLander-v2 | ivivnov | 2023-07-11T09:56:04Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T09:55:46Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.61 +/- 15.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; check the repository files for the actual .zip name
checkpoint = load_from_hub("ivivnov/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
pierre-loic/climate-news-articles | pierre-loic | 2023-07-11T09:53:58Z | 111 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"flaubert",
"text-classification",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-27T08:18:17Z | ---
license: cc
widget:
- text: "Nouveaux records d’émissions de CO₂ du secteur énergétique en 2022, selon une étude"
- text: "Climat et énergie : les objectifs de l’Union européenne pour 2030 ont du « plomb dans l’aile »"
- text: "Municipales à Paris : Emmanuel Grégoire « se prépare méthodiquement » pour l’après Hidalgo"
---
# 🌍 Detecting French press articles about climate-related topics
*🇬🇧 / 🇺🇸 : as this model is trained only on French data, all explanations are written in French in this repository. The goal of the model is to classify French newspaper headlines into two categories: about climate or not.*
## 🗺️ Context
This classifier of **French press article headlines** was built for the [Data for good](https://dataforgood.fr/) association in Grenoble, and more specifically for the [Quota climat](https://www.quotaclimat.org/) association.
The goal of this model is to determine, from its **headline** alone, whether a **press article** deals with the **topic of climate**. This task is difficult because the algorithm **has no access to the content** of the articles. Nevertheless, language models based on [transformers](https://fr.wikipedia.org/wiki/Transformeur), and in particular models built on a [BERT](https://fr.wikipedia.org/wiki/BERT_(mod%C3%A8le_de_langage)) architecture, can give interesting results. We studied the **two main models** based on this architecture and trained on **French corpora**: [FlauBERT](https://hal.science/hal-02784776v3/document) and [CamemBERT](https://camembert-model.fr/).
## 📋 Using the final model
The final model presented here is obviously **not perfect** and has **biases**. Some of its choices can indeed be debated: this comes from how the **scope** of the notion of **climate** was defined.
There are **two ways** to try the model with Python:
- Either by **downloading the model** with the Python library [transformers](https://pypi.org/project/transformers/)
To try the model, install the Python library [transformers](https://pypi.org/project/transformers/) in a virtual environment and run the following code:
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="pierre-loic/climate-news-articles")
sentence = "Guerre en Ukraine, en direct : le président allemand appelle à ne pas « bloquer » Washington pour la livraison d’armes à sous-munitions"
print(pipe(sentence))
```
```
[{'label': 'NE TRAITE PAS DU CLIMAT', 'score': 0.6566330194473267}]
```
- Or by calling the Hugging Face **API** with the Python library [requests](https://pypi.org/project/requests/)
To call the Hugging Face **API**, you need a **token**, which you can retrieve from your personal account settings. Then simply run the following code:
```python
import requests
API_URL = "https://api-inference.huggingface.co/models/pierre-loic/climate-news-articles"
headers = {"Authorization": "Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
output = query({
"inputs": "Canicule : deux nouveaux départements du Sud-Est placés en vigilance orange lundi",
})
print(output)
```
```
[[{'label': 'TRAITE DU CLIMAT', 'score': 0.511335015296936}, {'label': 'NE TRAITE PAS DU CLIMAT', 'score': 0.48866504430770874}]]
```
## 🔎 Training details
### Methodology
Several approaches were explored before arriving at the final model:
- The **first approach** we studied was to have [ChatGPT](https://openai.com/blog/chatgpt) classify press headlines as "climate" or "not climate" through [prompt engineering](https://en.wikipedia.org/wiki/Prompt_engineering). The results were fairly interesting, but the model sometimes failed on very simple cases.
- The **second approach** we studied was to vectorize the words of the headlines with a Tf-Idf method and use a classification model ([logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) and [random forest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)). The results were only slightly better than a dummy classifier (which always predicts the majority class "Climate").
- The **third approach** we studied was to embed the headlines with a [BERT](https://fr.wikipedia.org/wiki/BERT_(mod%C3%A8le_de_langage))-type model ([camemBERT](https://camembert-model.fr/), trained only on a French corpus) and then apply a classification model ([logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) and [random forest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)) to the embeddings. The results were interesting.
- The **fourth approach** (the one chosen for this model) was to fine-tune a BERT model (FlauBERT or CamemBERT) for the classification task.
### Data
The data comes from **French press headlines** collected over several months. We labelled roughly **2,000 of these headlines** to train the model.
### The final model
The selected model is a FlauBERT model **fine-tuned** for **press headline classification**. The **training data** was **undersampled** to balance the classes.
### Possible improvements
To **improve the model**, it could be worth **adding more data** from the areas where the model **makes the most mistakes**. |
ashnrk/textual_inversion_industrial | ashnrk | 2023-07-11T09:52:03Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-11T08:49:45Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/textual_inversion_industrial
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find example images in the repository.
|
Winmodel/ML-Agents-SnowballTarget | Winmodel | 2023-07-11T09:47:03Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-11T09:47:02Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Winmodel/ML-Agents-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
jalaluddin94/xlmr-nli-indoindo | jalaluddin94 | 2023-07-11T09:44:18Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-11T05:51:03Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: xlmr-nli-indoindo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-nli-indoindo
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6699
- Accuracy: 0.7701
- Precision: 0.7701
- Recall: 0.7701
- F1: 0.7693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0444 | 1.0 | 1722 | 0.8481 | 0.6463 | 0.6463 | 0.6463 | 0.6483 |
| 0.7958 | 2.0 | 3444 | 0.7483 | 0.7369 | 0.7369 | 0.7369 | 0.7353 |
| 0.7175 | 3.0 | 5166 | 0.6812 | 0.7579 | 0.7579 | 0.7579 | 0.7576 |
| 0.66 | 4.0 | 6888 | 0.6293 | 0.7679 | 0.7679 | 0.7679 | 0.7674 |
| 0.6056 | 5.0 | 8610 | 0.6459 | 0.7651 | 0.7651 | 0.7651 | 0.7640 |
| 0.5769 | 6.0 | 10332 | 0.6699 | 0.7701 | 0.7701 | 0.7701 | 0.7693 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
F-Haru/09-04-MarginMSELoss-finetuning-7-5 | F-Haru | 2023-07-11T09:43:47Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-11T08:30:56Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
This model was fine-tuned using only negative ja-en / en-ja pairs whose cosine similarity was at least 0.9 or at most 0.4.
The model that was subsequently knowledge-distilled is available in the other model repository.
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11912 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 510, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.9 | jordyvl | 2023-07-11T09:39:46Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-11T08:25:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.9
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2897
- Accuracy: 0.635
- Brier Loss: 0.5186
- Nll: 2.9908
- F1 Micro: 0.635
- F1 Macro: 0.6391
- Ece: 0.1984
- Aurc: 0.1511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 2.8799 | 0.12 | 0.9317 | 15.6566 | 0.12 | 0.1217 | 0.1503 | 0.8678 |
| No log | 2.0 | 50 | 2.2166 | 0.395 | 0.7576 | 9.4150 | 0.395 | 0.3645 | 0.2155 | 0.3726 |
| No log | 3.0 | 75 | 1.7821 | 0.505 | 0.6346 | 5.5305 | 0.505 | 0.4975 | 0.1755 | 0.2454 |
| No log | 4.0 | 100 | 1.6660 | 0.5275 | 0.6038 | 4.9669 | 0.5275 | 0.5333 | 0.1684 | 0.2324 |
| No log | 5.0 | 125 | 1.6118 | 0.54 | 0.5943 | 4.8266 | 0.54 | 0.5233 | 0.1947 | 0.2249 |
| No log | 6.0 | 150 | 1.7108 | 0.5275 | 0.6168 | 4.4308 | 0.5275 | 0.5247 | 0.2018 | 0.2418 |
| No log | 7.0 | 175 | 1.6465 | 0.5825 | 0.5721 | 4.8918 | 0.5825 | 0.5614 | 0.1887 | 0.1995 |
| No log | 8.0 | 200 | 1.6441 | 0.565 | 0.6040 | 4.2349 | 0.565 | 0.5591 | 0.1933 | 0.2216 |
| No log | 9.0 | 225 | 1.7054 | 0.565 | 0.6054 | 4.6348 | 0.565 | 0.5649 | 0.1845 | 0.2033 |
| No log | 10.0 | 250 | 1.6724 | 0.5375 | 0.6191 | 4.3502 | 0.5375 | 0.5257 | 0.1991 | 0.2223 |
| No log | 11.0 | 275 | 1.5397 | 0.57 | 0.5757 | 4.1311 | 0.57 | 0.5715 | 0.2079 | 0.1936 |
| No log | 12.0 | 300 | 1.7636 | 0.55 | 0.6394 | 5.0515 | 0.55 | 0.5376 | 0.2252 | 0.2268 |
| No log | 13.0 | 325 | 1.6080 | 0.575 | 0.5997 | 4.2707 | 0.575 | 0.5515 | 0.2048 | 0.1887 |
| No log | 14.0 | 350 | 1.7572 | 0.575 | 0.6205 | 4.6140 | 0.575 | 0.5705 | 0.2203 | 0.2342 |
| No log | 15.0 | 375 | 1.5604 | 0.58 | 0.5872 | 3.8633 | 0.58 | 0.5762 | 0.2089 | 0.1866 |
| No log | 16.0 | 400 | 1.6440 | 0.585 | 0.6042 | 4.2508 | 0.585 | 0.5940 | 0.2253 | 0.2182 |
| No log | 17.0 | 425 | 1.6117 | 0.5825 | 0.6057 | 4.2511 | 0.5825 | 0.5732 | 0.2299 | 0.1947 |
| No log | 18.0 | 450 | 1.5597 | 0.605 | 0.5732 | 4.4755 | 0.605 | 0.6028 | 0.2101 | 0.1721 |
| No log | 19.0 | 475 | 1.4177 | 0.6325 | 0.5429 | 3.4771 | 0.6325 | 0.6319 | 0.1930 | 0.1786 |
| 0.5354 | 20.0 | 500 | 1.5745 | 0.56 | 0.6076 | 3.6058 | 0.56 | 0.5643 | 0.2265 | 0.1898 |
| 0.5354 | 21.0 | 525 | 1.4907 | 0.6125 | 0.5682 | 3.9837 | 0.6125 | 0.6184 | 0.1981 | 0.1810 |
| 0.5354 | 22.0 | 550 | 1.4494 | 0.5925 | 0.5677 | 3.2864 | 0.5925 | 0.5906 | 0.2187 | 0.1670 |
| 0.5354 | 23.0 | 575 | 1.5608 | 0.62 | 0.5830 | 4.0132 | 0.62 | 0.6029 | 0.2286 | 0.1808 |
| 0.5354 | 24.0 | 600 | 1.5038 | 0.58 | 0.5957 | 3.6519 | 0.58 | 0.5956 | 0.2321 | 0.1879 |
| 0.5354 | 25.0 | 625 | 1.4094 | 0.615 | 0.5554 | 3.0313 | 0.615 | 0.6102 | 0.2180 | 0.1689 |
| 0.5354 | 26.0 | 650 | 1.4485 | 0.62 | 0.5712 | 3.3326 | 0.62 | 0.6181 | 0.2138 | 0.1729 |
| 0.5354 | 27.0 | 675 | 1.4156 | 0.6225 | 0.5621 | 3.2257 | 0.6225 | 0.6239 | 0.2158 | 0.1718 |
| 0.5354 | 28.0 | 700 | 1.3729 | 0.6275 | 0.5476 | 3.1300 | 0.6275 | 0.6285 | 0.2078 | 0.1620 |
| 0.5354 | 29.0 | 725 | 1.3671 | 0.6275 | 0.5337 | 3.4625 | 0.6275 | 0.6285 | 0.2177 | 0.1586 |
| 0.5354 | 30.0 | 750 | 1.3263 | 0.63 | 0.5380 | 3.2177 | 0.63 | 0.6338 | 0.2063 | 0.1577 |
| 0.5354 | 31.0 | 775 | 1.2991 | 0.6225 | 0.5223 | 3.0482 | 0.6225 | 0.6238 | 0.1940 | 0.1525 |
| 0.5354 | 32.0 | 800 | 1.3227 | 0.6325 | 0.5333 | 2.9622 | 0.6325 | 0.6351 | 0.1906 | 0.1554 |
| 0.5354 | 33.0 | 825 | 1.3077 | 0.63 | 0.5298 | 3.2060 | 0.63 | 0.6338 | 0.1933 | 0.1555 |
| 0.5354 | 34.0 | 850 | 1.3036 | 0.6225 | 0.5269 | 3.0431 | 0.6225 | 0.6242 | 0.1996 | 0.1535 |
| 0.5354 | 35.0 | 875 | 1.3057 | 0.6275 | 0.5263 | 2.9651 | 0.6275 | 0.6291 | 0.2023 | 0.1538 |
| 0.5354 | 36.0 | 900 | 1.2992 | 0.6275 | 0.5247 | 2.9748 | 0.6275 | 0.6289 | 0.1961 | 0.1518 |
| 0.5354 | 37.0 | 925 | 1.3001 | 0.6325 | 0.5252 | 2.9784 | 0.6325 | 0.6347 | 0.1978 | 0.1531 |
| 0.5354 | 38.0 | 950 | 1.2990 | 0.63 | 0.5229 | 2.9014 | 0.63 | 0.6327 | 0.1981 | 0.1524 |
| 0.5354 | 39.0 | 975 | 1.2995 | 0.6325 | 0.5246 | 2.9776 | 0.6325 | 0.6354 | 0.1946 | 0.1533 |
| 0.0336 | 40.0 | 1000 | 1.2945 | 0.6275 | 0.5226 | 2.9029 | 0.6275 | 0.6302 | 0.1965 | 0.1523 |
| 0.0336 | 41.0 | 1025 | 1.3023 | 0.63 | 0.5247 | 3.0515 | 0.63 | 0.6341 | 0.2044 | 0.1534 |
| 0.0336 | 42.0 | 1050 | 1.2990 | 0.635 | 0.5239 | 3.0673 | 0.635 | 0.6381 | 0.1952 | 0.1516 |
| 0.0336 | 43.0 | 1075 | 1.2962 | 0.635 | 0.5213 | 3.0585 | 0.635 | 0.6378 | 0.2055 | 0.1523 |
| 0.0336 | 44.0 | 1100 | 1.2991 | 0.625 | 0.5229 | 2.9801 | 0.625 | 0.6278 | 0.1954 | 0.1532 |
| 0.0336 | 45.0 | 1125 | 1.2949 | 0.6375 | 0.5222 | 3.0564 | 0.6375 | 0.6419 | 0.2027 | 0.1519 |
| 0.0336 | 46.0 | 1150 | 1.2989 | 0.6275 | 0.5228 | 3.0737 | 0.6275 | 0.6308 | 0.2075 | 0.1529 |
| 0.0336 | 47.0 | 1175 | 1.2902 | 0.6325 | 0.5201 | 3.0606 | 0.6325 | 0.6360 | 0.2099 | 0.1516 |
| 0.0336 | 48.0 | 1200 | 1.2971 | 0.6275 | 0.5217 | 3.0829 | 0.6275 | 0.6305 | 0.1882 | 0.1518 |
| 0.0336 | 49.0 | 1225 | 1.2913 | 0.63 | 0.5212 | 2.9853 | 0.63 | 0.6332 | 0.1928 | 0.1524 |
| 0.0336 | 50.0 | 1250 | 1.2917 | 0.63 | 0.5205 | 2.9850 | 0.63 | 0.6336 | 0.1910 | 0.1518 |
| 0.0336 | 51.0 | 1275 | 1.2928 | 0.63 | 0.5208 | 3.0579 | 0.63 | 0.6330 | 0.2020 | 0.1528 |
| 0.0336 | 52.0 | 1300 | 1.2941 | 0.635 | 0.5205 | 3.0647 | 0.635 | 0.6383 | 0.1919 | 0.1515 |
| 0.0336 | 53.0 | 1325 | 1.2930 | 0.635 | 0.5207 | 3.0637 | 0.635 | 0.6384 | 0.1868 | 0.1518 |
| 0.0336 | 54.0 | 1350 | 1.2918 | 0.63 | 0.5203 | 3.0628 | 0.63 | 0.6335 | 0.1986 | 0.1519 |
| 0.0336 | 55.0 | 1375 | 1.2894 | 0.635 | 0.5198 | 2.9874 | 0.635 | 0.6383 | 0.2026 | 0.1514 |
| 0.0336 | 56.0 | 1400 | 1.2913 | 0.63 | 0.5203 | 3.0691 | 0.63 | 0.6337 | 0.2045 | 0.1519 |
| 0.0336 | 57.0 | 1425 | 1.2923 | 0.6325 | 0.5205 | 2.9869 | 0.6325 | 0.6358 | 0.1962 | 0.1522 |
| 0.0336 | 58.0 | 1450 | 1.2927 | 0.6375 | 0.5199 | 3.0734 | 0.6375 | 0.6408 | 0.1905 | 0.1514 |
| 0.0336 | 59.0 | 1475 | 1.2931 | 0.6325 | 0.5204 | 3.0607 | 0.6325 | 0.6353 | 0.1980 | 0.1520 |
| 0.0236 | 60.0 | 1500 | 1.2911 | 0.6325 | 0.5199 | 3.0664 | 0.6325 | 0.6359 | 0.1875 | 0.1517 |
| 0.0236 | 61.0 | 1525 | 1.2901 | 0.635 | 0.5195 | 2.9877 | 0.635 | 0.6386 | 0.1907 | 0.1516 |
| 0.0236 | 62.0 | 1550 | 1.2913 | 0.635 | 0.5192 | 3.0655 | 0.635 | 0.6383 | 0.1971 | 0.1515 |
| 0.0236 | 63.0 | 1575 | 1.2920 | 0.635 | 0.5201 | 3.0044 | 0.635 | 0.6379 | 0.1991 | 0.1514 |
| 0.0236 | 64.0 | 1600 | 1.2911 | 0.635 | 0.5192 | 3.0654 | 0.635 | 0.6380 | 0.1848 | 0.1509 |
| 0.0236 | 65.0 | 1625 | 1.2924 | 0.635 | 0.5196 | 3.1438 | 0.635 | 0.6379 | 0.1969 | 0.1515 |
| 0.0236 | 66.0 | 1650 | 1.2901 | 0.635 | 0.5191 | 2.9928 | 0.635 | 0.6392 | 0.1978 | 0.1507 |
| 0.0236 | 67.0 | 1675 | 1.2911 | 0.6325 | 0.5189 | 3.0662 | 0.6325 | 0.6359 | 0.1896 | 0.1517 |
| 0.0236 | 68.0 | 1700 | 1.2911 | 0.6375 | 0.5193 | 2.9932 | 0.6375 | 0.6404 | 0.2017 | 0.1507 |
| 0.0236 | 69.0 | 1725 | 1.2893 | 0.635 | 0.5189 | 2.9907 | 0.635 | 0.6391 | 0.1951 | 0.1511 |
| 0.0236 | 70.0 | 1750 | 1.2913 | 0.6325 | 0.5195 | 2.9919 | 0.6325 | 0.6362 | 0.1955 | 0.1513 |
| 0.0236 | 71.0 | 1775 | 1.2899 | 0.635 | 0.5188 | 2.9899 | 0.635 | 0.6386 | 0.2049 | 0.1511 |
| 0.0236 | 72.0 | 1800 | 1.2912 | 0.635 | 0.5192 | 2.9914 | 0.635 | 0.6379 | 0.1924 | 0.1513 |
| 0.0236 | 73.0 | 1825 | 1.2898 | 0.6325 | 0.5188 | 2.9901 | 0.6325 | 0.6367 | 0.2059 | 0.1511 |
| 0.0236 | 74.0 | 1850 | 1.2902 | 0.635 | 0.5190 | 2.9918 | 0.635 | 0.6391 | 0.2069 | 0.1511 |
| 0.0236 | 75.0 | 1875 | 1.2904 | 0.635 | 0.5191 | 2.9916 | 0.635 | 0.6391 | 0.1969 | 0.1511 |
| 0.0236 | 76.0 | 1900 | 1.2905 | 0.635 | 0.5191 | 2.9899 | 0.635 | 0.6391 | 0.1969 | 0.1512 |
| 0.0236 | 77.0 | 1925 | 1.2904 | 0.635 | 0.5191 | 2.9917 | 0.635 | 0.6391 | 0.1926 | 0.1511 |
| 0.0236 | 78.0 | 1950 | 1.2899 | 0.635 | 0.5188 | 2.9909 | 0.635 | 0.6391 | 0.2010 | 0.1510 |
| 0.0236 | 79.0 | 1975 | 1.2900 | 0.635 | 0.5188 | 2.9908 | 0.635 | 0.6391 | 0.2034 | 0.1511 |
| 0.0233 | 80.0 | 2000 | 1.2900 | 0.635 | 0.5188 | 2.9910 | 0.635 | 0.6391 | 0.1967 | 0.1511 |
| 0.0233 | 81.0 | 2025 | 1.2900 | 0.635 | 0.5188 | 2.9911 | 0.635 | 0.6391 | 0.2002 | 0.1511 |
| 0.0233 | 82.0 | 2050 | 1.2901 | 0.635 | 0.5189 | 2.9909 | 0.635 | 0.6391 | 0.1993 | 0.1511 |
| 0.0233 | 83.0 | 2075 | 1.2900 | 0.635 | 0.5188 | 2.9906 | 0.635 | 0.6391 | 0.1937 | 0.1511 |
| 0.0233 | 84.0 | 2100 | 1.2901 | 0.635 | 0.5189 | 2.9917 | 0.635 | 0.6391 | 0.2026 | 0.1511 |
| 0.0233 | 85.0 | 2125 | 1.2899 | 0.635 | 0.5188 | 2.9905 | 0.635 | 0.6391 | 0.1993 | 0.1512 |
| 0.0233 | 86.0 | 2150 | 1.2897 | 0.635 | 0.5187 | 2.9906 | 0.635 | 0.6391 | 0.1976 | 0.1511 |
| 0.0233 | 87.0 | 2175 | 1.2899 | 0.635 | 0.5188 | 2.9905 | 0.635 | 0.6391 | 0.1980 | 0.1511 |
| 0.0233 | 88.0 | 2200 | 1.2897 | 0.635 | 0.5187 | 2.9911 | 0.635 | 0.6391 | 0.1957 | 0.1511 |
| 0.0233 | 89.0 | 2225 | 1.2899 | 0.635 | 0.5187 | 2.9910 | 0.635 | 0.6391 | 0.1970 | 0.1511 |
| 0.0233 | 90.0 | 2250 | 1.2898 | 0.635 | 0.5187 | 2.9905 | 0.635 | 0.6391 | 0.1988 | 0.1512 |
| 0.0233 | 91.0 | 2275 | 1.2897 | 0.635 | 0.5187 | 2.9908 | 0.635 | 0.6391 | 0.1961 | 0.1511 |
| 0.0233 | 92.0 | 2300 | 1.2898 | 0.635 | 0.5187 | 2.9908 | 0.635 | 0.6391 | 0.1966 | 0.1511 |
| 0.0233 | 93.0 | 2325 | 1.2897 | 0.635 | 0.5186 | 2.9908 | 0.635 | 0.6391 | 0.1984 | 0.1511 |
| 0.0233 | 94.0 | 2350 | 1.2898 | 0.635 | 0.5187 | 2.9907 | 0.635 | 0.6391 | 0.2009 | 0.1511 |
| 0.0233 | 95.0 | 2375 | 1.2897 | 0.635 | 0.5186 | 2.9908 | 0.635 | 0.6391 | 0.2023 | 0.1511 |
| 0.0233 | 96.0 | 2400 | 1.2897 | 0.635 | 0.5186 | 2.9908 | 0.635 | 0.6391 | 0.1985 | 0.1511 |
| 0.0233 | 97.0 | 2425 | 1.2897 | 0.635 | 0.5186 | 2.9908 | 0.635 | 0.6391 | 0.1984 | 0.1511 |
| 0.0233 | 98.0 | 2450 | 1.2897 | 0.635 | 0.5186 | 2.9908 | 0.635 | 0.6391 | 0.1985 | 0.1511 |
| 0.0233 | 99.0 | 2475 | 1.2897 | 0.635 | 0.5186 | 2.9909 | 0.635 | 0.6391 | 0.1984 | 0.1511 |
| 0.0232 | 100.0 | 2500 | 1.2897 | 0.635 | 0.5186 | 2.9908 | 0.635 | 0.6391 | 0.1984 | 0.1511 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
dsfsi/ss-en-m2m100-gov | dsfsi | 2023-07-11T09:39:30Z | 112 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"m2m_100",
"text2text-generation",
"m2m100",
"translation",
"africanlp",
"african",
"siswati",
"ss",
"en",
"arxiv:2303.03750",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-22T08:46:05Z | ---
license: cc-by-4.0
language:
- ss
- en
pipeline_tag: text2text-generation
tags:
- m2m100
- translation
- africanlp
- african
- siswati
---
# [ss-en] Siswati to English Translation Model based on M2M100 and The South African Gov-ZA multilingual corpus
This model was trained on Siswati-to-English aligned sentences from [The South African Gov-ZA multilingual corpus](https://github.com/dsfsi/gov-za-multilingual).
The dataset contains cabinet statements from the South African government, maintained by the Government Communication and Information System (GCIS). Data was scraped from the government's website: https://www.gov.za/cabinet-statements
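Below is a minimal usage sketch with the standard 🤗 Transformers M2M100 classes (not part of the original card; the Siswati input sentence is only an illustrative placeholder):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "dsfsi/ss-en-m2m100-gov"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# Translate Siswati ("ss") to English ("en")
tokenizer.src_lang = "ss"
inputs = tokenizer("Sawubona mhlaba.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```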
## Authors
- Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
- Matimba Shingange
- Richard Lastrucci
- Isheanesu Joseph Dzingirai
- Jenalea Rajab
## BibTeX entry and citation info
```
@inproceedings{lastrucci-etal-2023-preparing,
title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.rail-1.3",
pages = "18--25"
}
```
[Paper - Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora](https://arxiv.org/abs/2303.03750) |
subandwho/trial3 | subandwho | 2023-07-11T09:27:02Z | 3 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-11T09:26:58Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
RogerB/KinyaBERT-small-finetuned-kintweetsA | RogerB | 2023-07-11T09:18:46Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-11T09:18:09Z | ---
tags:
- generated_from_trainer
model-index:
- name: KinyaBERT-small-finetuned-kintweetsA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KinyaBERT-small-finetuned-kintweetsA
This model is a fine-tuned version of [jean-paul/KinyaBERT-small](https://huggingface.co/jean-paul/KinyaBERT-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8590
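As a quick usage sketch (not from the original card), the checkpoint can be queried with the fill-mask pipeline; the Kinyarwanda sentence below is only an illustrative placeholder:
```python
from transformers import pipeline

# Masked-token prediction with the fine-tuned checkpoint
fill_mask = pipeline("fill-mask", model="RogerB/KinyaBERT-small-finetuned-kintweetsA")
print(fill_mask("Ndagukunda [MASK] cyane."))
```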
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2458 | 1.0 | 60 | 5.0038 |
| 5.0197 | 2.0 | 120 | 5.1308 |
| 4.8906 | 3.0 | 180 | 4.8419 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ntranphong/my_setfit_model_redweasel | ntranphong | 2023-07-11T09:12:07Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-07-11T09:11:23Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# ntranphong/my_setfit_model_redweasel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("ntranphong/my_setfit_model_redweasel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Bluishoul/grimoire-model | Bluishoul | 2023-07-11T08:55:00Z | 0 | 0 | transformers | [
"transformers",
"text-classification",
"dataset:Open-Orca/OpenOrca",
"doi:10.57967/hf/0873",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-30T02:45:26Z | ---
license: openrail
pipeline_tag: text-classification
library_name: transformers
datasets:
- Open-Orca/OpenOrca
--- |
zhundred/SpaceInvadersNoFrameskip-v4 | zhundred | 2023-07-11T08:52:32Z | 9 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T08:52:02Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 415.00 +/- 187.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhundred -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhundred -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zhundred
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
NasimB/gpt2-concat-all-mod-datasets1-rarity-all-iorder-c13k | NasimB | 2023-07-11T08:45:58Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-11T07:01:32Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-mod-datasets1-rarity-all-iorder-c13k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-mod-datasets1-rarity-all-iorder-c13k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3983
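A minimal text-generation sketch (not part of the original card; the prompt is an arbitrary example):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-all-mod-datasets1-rarity-all-iorder-c13k",
)
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```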
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7811 | 0.32 | 500 | 5.6598 |
| 5.4368 | 0.63 | 1000 | 5.2297 |
| 5.0819 | 0.95 | 1500 | 4.9819 |
| 4.8064 | 1.27 | 2000 | 4.8391 |
| 4.6653 | 1.58 | 2500 | 4.7273 |
| 4.5682 | 1.9 | 3000 | 4.6197 |
| 4.3541 | 2.22 | 3500 | 4.5701 |
| 4.2704 | 2.53 | 4000 | 4.5079 |
| 4.2264 | 2.85 | 4500 | 4.4351 |
| 4.051 | 3.17 | 5000 | 4.4290 |
| 3.9415 | 3.49 | 5500 | 4.3896 |
| 3.9311 | 3.8 | 6000 | 4.3596 |
| 3.8035 | 4.12 | 6500 | 4.3598 |
| 3.6487 | 4.44 | 7000 | 4.3523 |
| 3.6387 | 4.75 | 7500 | 4.3363 |
| 3.5857 | 5.07 | 8000 | 4.3408 |
| 3.4463 | 5.39 | 8500 | 4.3415 |
| 3.4459 | 5.7 | 9000 | 4.3420 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
KennethTM/gpt2-small-danish | KennethTM | 2023-07-11T08:37:00Z | 193 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"dataset:oscar",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-17T17:51:54Z | ---
datasets:
- oscar
language:
- da
widget:
- text: Der var engang
---
# What is this?
A GPT-2 model (small version, 124 M parameters) for Danish text generation. The model was not pre-trained from scratch but adapted from the English version.
# How to use
Test the model using the pipeline from the [🤗 Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import pipeline
generator = pipeline("text-generation", model = "KennethTM/gpt2-small-danish")
text = generator("Manden arbejdede som")
print(text[0]["generated_text"])
```
Or load it using the Auto* classes:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("KennethTM/gpt2-small-danish")
model = AutoModelForCausalLM.from_pretrained("KennethTM/gpt2-small-danish")
```
# Model training
The model is trained using the Danish part of the [oscar dataset](https://huggingface.co/datasets/oscar) ('unshuffled_deduplicated_da') and a context length of 1024 tokens.
The model weights are initialized from the English [GPT-2 small model](https://huggingface.co/gpt2) with new word token embeddings created for Danish using [WECHSEL](https://github.com/CPJKU/wechsel).
Initially, only the word token embeddings are trained using 50,000 samples. Finally, the whole model is trained using 1,000,000 samples.
For reference, the model achieves a perplexity of 33.5 on 5,000 random validation samples.
Model training is carried out on an 8 GB GPU.
# Notes
This is a pre-trained model; for optimal performance it should be fine-tuned for new tasks.
|
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.7 | jordyvl | 2023-07-11T08:25:06Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-11T07:11:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.7
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2378
- Accuracy: 0.645
- Brier Loss: 0.4995
- Nll: 2.6600
- F1 Micro: 0.645
- F1 Macro: 0.6464
- Ece: 0.1850
- Aurc: 0.1447
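A minimal inference sketch, assuming the standard ViT image-classification workflow (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

ckpt = "jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.7"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForImageClassification.from_pretrained(ckpt)

image = Image.open("document_page.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```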
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 3.1863 | 0.105 | 0.9328 | 15.2391 | 0.1050 | 0.1096 | 0.1551 | 0.8788 |
| No log | 2.0 | 50 | 2.4570 | 0.395 | 0.7500 | 9.2532 | 0.395 | 0.3662 | 0.1883 | 0.3593 |
| No log | 3.0 | 75 | 1.9474 | 0.51 | 0.6157 | 5.2483 | 0.51 | 0.4950 | 0.1693 | 0.2362 |
| No log | 4.0 | 100 | 1.8038 | 0.5375 | 0.5910 | 4.7704 | 0.5375 | 0.5412 | 0.1672 | 0.2240 |
| No log | 5.0 | 125 | 1.7706 | 0.5425 | 0.6043 | 4.4142 | 0.5425 | 0.5313 | 0.1961 | 0.2262 |
| No log | 6.0 | 150 | 1.6182 | 0.58 | 0.5399 | 3.8940 | 0.58 | 0.5814 | 0.1548 | 0.1768 |
| No log | 7.0 | 175 | 1.6199 | 0.6025 | 0.5494 | 3.7722 | 0.6025 | 0.6047 | 0.1571 | 0.1815 |
| No log | 8.0 | 200 | 1.6354 | 0.585 | 0.5620 | 4.3106 | 0.585 | 0.5782 | 0.2067 | 0.1958 |
| No log | 9.0 | 225 | 1.8421 | 0.555 | 0.6076 | 5.4885 | 0.555 | 0.5516 | 0.1995 | 0.2339 |
| No log | 10.0 | 250 | 1.8780 | 0.545 | 0.6302 | 5.0672 | 0.545 | 0.5457 | 0.2036 | 0.2356 |
| No log | 11.0 | 275 | 1.4752 | 0.59 | 0.5450 | 3.4210 | 0.59 | 0.5985 | 0.1751 | 0.1817 |
| No log | 12.0 | 300 | 1.4825 | 0.615 | 0.5332 | 3.3838 | 0.615 | 0.6180 | 0.1764 | 0.1727 |
| No log | 13.0 | 325 | 1.4550 | 0.6325 | 0.5238 | 3.3565 | 0.6325 | 0.6264 | 0.1702 | 0.1607 |
| No log | 14.0 | 350 | 1.4558 | 0.6025 | 0.5424 | 3.2294 | 0.6025 | 0.6060 | 0.1850 | 0.1709 |
| No log | 15.0 | 375 | 1.4164 | 0.6225 | 0.5239 | 3.4651 | 0.6225 | 0.6149 | 0.1797 | 0.1727 |
| No log | 16.0 | 400 | 1.4977 | 0.5975 | 0.5490 | 4.1918 | 0.5975 | 0.5901 | 0.1918 | 0.1761 |
| No log | 17.0 | 425 | 1.4744 | 0.605 | 0.5490 | 3.7221 | 0.605 | 0.5971 | 0.1955 | 0.1752 |
| No log | 18.0 | 450 | 1.5371 | 0.6225 | 0.5563 | 3.9267 | 0.6225 | 0.6194 | 0.1946 | 0.1713 |
| No log | 19.0 | 475 | 1.3703 | 0.61 | 0.5230 | 2.9363 | 0.61 | 0.6115 | 0.1808 | 0.1606 |
| 0.6508 | 20.0 | 500 | 1.3942 | 0.625 | 0.5353 | 3.7288 | 0.625 | 0.6218 | 0.1949 | 0.1549 |
| 0.6508 | 21.0 | 525 | 1.3539 | 0.62 | 0.5281 | 3.2632 | 0.62 | 0.6256 | 0.2058 | 0.1554 |
| 0.6508 | 22.0 | 550 | 1.3411 | 0.6525 | 0.5040 | 3.4382 | 0.6525 | 0.6462 | 0.1740 | 0.1522 |
| 0.6508 | 23.0 | 575 | 1.3133 | 0.62 | 0.5073 | 3.1716 | 0.62 | 0.6213 | 0.1804 | 0.1497 |
| 0.6508 | 24.0 | 600 | 1.4132 | 0.6275 | 0.5343 | 3.4836 | 0.6275 | 0.6311 | 0.1808 | 0.1635 |
| 0.6508 | 25.0 | 625 | 1.4322 | 0.6275 | 0.5464 | 2.9913 | 0.6275 | 0.6374 | 0.1949 | 0.1747 |
| 0.6508 | 26.0 | 650 | 1.4199 | 0.615 | 0.5482 | 3.2476 | 0.615 | 0.6183 | 0.1977 | 0.1705 |
| 0.6508 | 27.0 | 675 | 1.3493 | 0.6275 | 0.5250 | 3.5747 | 0.6275 | 0.6239 | 0.2046 | 0.1518 |
| 0.6508 | 28.0 | 700 | 1.2954 | 0.635 | 0.5078 | 3.0855 | 0.635 | 0.6355 | 0.1787 | 0.1475 |
| 0.6508 | 29.0 | 725 | 1.3715 | 0.6375 | 0.5270 | 3.3421 | 0.6375 | 0.6254 | 0.1888 | 0.1591 |
| 0.6508 | 30.0 | 750 | 1.3038 | 0.645 | 0.5160 | 3.2790 | 0.645 | 0.6443 | 0.1859 | 0.1543 |
| 0.6508 | 31.0 | 775 | 1.3311 | 0.6375 | 0.5259 | 3.0953 | 0.6375 | 0.6364 | 0.1899 | 0.1593 |
| 0.6508 | 32.0 | 800 | 1.2487 | 0.6375 | 0.4942 | 2.9030 | 0.6375 | 0.6406 | 0.1822 | 0.1424 |
| 0.6508 | 33.0 | 825 | 1.2838 | 0.645 | 0.5096 | 2.8108 | 0.645 | 0.6448 | 0.1845 | 0.1532 |
| 0.6508 | 34.0 | 850 | 1.2788 | 0.6525 | 0.5103 | 2.8377 | 0.6525 | 0.6524 | 0.2013 | 0.1505 |
| 0.6508 | 35.0 | 875 | 1.2478 | 0.6425 | 0.5011 | 2.6533 | 0.6425 | 0.6432 | 0.1735 | 0.1435 |
| 0.6508 | 36.0 | 900 | 1.2420 | 0.6375 | 0.5030 | 2.5071 | 0.6375 | 0.6399 | 0.1853 | 0.1461 |
| 0.6508 | 37.0 | 925 | 1.2406 | 0.6375 | 0.4992 | 2.5840 | 0.6375 | 0.6391 | 0.1795 | 0.1456 |
| 0.6508 | 38.0 | 950 | 1.2493 | 0.645 | 0.5035 | 2.5959 | 0.645 | 0.6463 | 0.1905 | 0.1461 |
| 0.6508 | 39.0 | 975 | 1.2446 | 0.6425 | 0.5029 | 2.6545 | 0.6425 | 0.6441 | 0.1943 | 0.1445 |
| 0.0591 | 40.0 | 1000 | 1.2471 | 0.6525 | 0.5005 | 2.5163 | 0.6525 | 0.6529 | 0.1830 | 0.1460 |
| 0.0591 | 41.0 | 1025 | 1.2420 | 0.635 | 0.5009 | 2.5884 | 0.635 | 0.6371 | 0.1842 | 0.1448 |
| 0.0591 | 42.0 | 1050 | 1.2471 | 0.6475 | 0.5016 | 2.6730 | 0.6475 | 0.6476 | 0.1905 | 0.1463 |
| 0.0591 | 43.0 | 1075 | 1.2452 | 0.635 | 0.5036 | 2.5784 | 0.635 | 0.6373 | 0.1786 | 0.1466 |
| 0.0591 | 44.0 | 1100 | 1.2404 | 0.6475 | 0.4999 | 2.5804 | 0.6475 | 0.6468 | 0.1757 | 0.1448 |
| 0.0591 | 45.0 | 1125 | 1.2443 | 0.64 | 0.5025 | 2.5843 | 0.64 | 0.6425 | 0.1852 | 0.1457 |
| 0.0591 | 46.0 | 1150 | 1.2429 | 0.6425 | 0.5001 | 2.5071 | 0.6425 | 0.6441 | 0.1886 | 0.1454 |
| 0.0591 | 47.0 | 1175 | 1.2450 | 0.645 | 0.5028 | 2.5860 | 0.645 | 0.6460 | 0.1957 | 0.1453 |
| 0.0591 | 48.0 | 1200 | 1.2391 | 0.6375 | 0.4993 | 2.6594 | 0.6375 | 0.6379 | 0.1802 | 0.1456 |
| 0.0591 | 49.0 | 1225 | 1.2421 | 0.6425 | 0.5006 | 2.5857 | 0.6425 | 0.6428 | 0.1933 | 0.1450 |
| 0.0591 | 50.0 | 1250 | 1.2413 | 0.6425 | 0.5007 | 2.6657 | 0.6425 | 0.6432 | 0.1861 | 0.1455 |
| 0.0591 | 51.0 | 1275 | 1.2399 | 0.645 | 0.4995 | 2.5804 | 0.645 | 0.6469 | 0.1949 | 0.1448 |
| 0.0591 | 52.0 | 1300 | 1.2425 | 0.645 | 0.5013 | 2.5908 | 0.645 | 0.6442 | 0.1766 | 0.1448 |
| 0.0591 | 53.0 | 1325 | 1.2407 | 0.64 | 0.5006 | 2.5801 | 0.64 | 0.6415 | 0.1818 | 0.1458 |
| 0.0591 | 54.0 | 1350 | 1.2402 | 0.6425 | 0.5004 | 2.6583 | 0.6425 | 0.6451 | 0.1967 | 0.1452 |
| 0.0591 | 55.0 | 1375 | 1.2394 | 0.645 | 0.5000 | 2.5852 | 0.645 | 0.6464 | 0.1829 | 0.1446 |
| 0.0591 | 56.0 | 1400 | 1.2391 | 0.6425 | 0.4999 | 2.5903 | 0.6425 | 0.6444 | 0.1902 | 0.1449 |
| 0.0591 | 57.0 | 1425 | 1.2384 | 0.6475 | 0.4994 | 2.5864 | 0.6475 | 0.6483 | 0.1935 | 0.1446 |
| 0.0591 | 58.0 | 1450 | 1.2409 | 0.6425 | 0.5007 | 2.5842 | 0.6425 | 0.6450 | 0.1868 | 0.1451 |
| 0.0591 | 59.0 | 1475 | 1.2389 | 0.6425 | 0.4999 | 2.5848 | 0.6425 | 0.6444 | 0.1845 | 0.1447 |
| 0.0363 | 60.0 | 1500 | 1.2391 | 0.6425 | 0.4998 | 2.6608 | 0.6425 | 0.6443 | 0.1823 | 0.1449 |
| 0.0363 | 61.0 | 1525 | 1.2393 | 0.6475 | 0.5002 | 2.6602 | 0.6475 | 0.6484 | 0.1966 | 0.1446 |
| 0.0363 | 62.0 | 1550 | 1.2385 | 0.6425 | 0.4994 | 2.5912 | 0.6425 | 0.6427 | 0.1932 | 0.1448 |
| 0.0363 | 63.0 | 1575 | 1.2396 | 0.6425 | 0.5003 | 2.6605 | 0.6425 | 0.6444 | 0.1909 | 0.1450 |
| 0.0363 | 64.0 | 1600 | 1.2388 | 0.6425 | 0.4996 | 2.6609 | 0.6425 | 0.6443 | 0.1862 | 0.1449 |
| 0.0363 | 65.0 | 1625 | 1.2387 | 0.645 | 0.5000 | 2.6604 | 0.645 | 0.6465 | 0.1826 | 0.1446 |
| 0.0363 | 66.0 | 1650 | 1.2390 | 0.645 | 0.4998 | 2.5910 | 0.645 | 0.6464 | 0.1868 | 0.1447 |
| 0.0363 | 67.0 | 1675 | 1.2388 | 0.6425 | 0.4999 | 2.6605 | 0.6425 | 0.6444 | 0.1803 | 0.1448 |
| 0.0363 | 68.0 | 1700 | 1.2387 | 0.6425 | 0.4996 | 2.6608 | 0.6425 | 0.6444 | 0.1845 | 0.1448 |
| 0.0363 | 69.0 | 1725 | 1.2388 | 0.6475 | 0.4999 | 2.6597 | 0.6475 | 0.6484 | 0.1878 | 0.1445 |
| 0.0363 | 70.0 | 1750 | 1.2387 | 0.645 | 0.4997 | 2.6601 | 0.645 | 0.6465 | 0.1870 | 0.1448 |
| 0.0363 | 71.0 | 1775 | 1.2382 | 0.6425 | 0.4996 | 2.6606 | 0.6425 | 0.6444 | 0.1954 | 0.1448 |
| 0.0363 | 72.0 | 1800 | 1.2387 | 0.645 | 0.4998 | 2.6595 | 0.645 | 0.6465 | 0.1866 | 0.1447 |
| 0.0363 | 73.0 | 1825 | 1.2381 | 0.645 | 0.4996 | 2.6602 | 0.645 | 0.6464 | 0.1838 | 0.1446 |
| 0.0363 | 74.0 | 1850 | 1.2384 | 0.6425 | 0.4996 | 2.6605 | 0.6425 | 0.6444 | 0.1908 | 0.1449 |
| 0.0363 | 75.0 | 1875 | 1.2384 | 0.6425 | 0.4997 | 2.6601 | 0.6425 | 0.6443 | 0.1876 | 0.1449 |
| 0.0363 | 76.0 | 1900 | 1.2383 | 0.645 | 0.4996 | 2.6602 | 0.645 | 0.6464 | 0.1881 | 0.1447 |
| 0.0363 | 77.0 | 1925 | 1.2383 | 0.645 | 0.4997 | 2.6601 | 0.645 | 0.6464 | 0.1851 | 0.1447 |
| 0.0363 | 78.0 | 1950 | 1.2382 | 0.6425 | 0.4996 | 2.6601 | 0.6425 | 0.6443 | 0.1882 | 0.1448 |
| 0.0363 | 79.0 | 1975 | 1.2381 | 0.645 | 0.4996 | 2.6600 | 0.645 | 0.6464 | 0.1854 | 0.1447 |
| 0.036 | 80.0 | 2000 | 1.2381 | 0.6425 | 0.4996 | 2.6603 | 0.6425 | 0.6443 | 0.1882 | 0.1448 |
| 0.036 | 81.0 | 2025 | 1.2382 | 0.645 | 0.4996 | 2.6601 | 0.645 | 0.6464 | 0.1854 | 0.1447 |
| 0.036 | 82.0 | 2050 | 1.2380 | 0.6425 | 0.4996 | 2.6601 | 0.6425 | 0.6443 | 0.1942 | 0.1448 |
| 0.036 | 83.0 | 2075 | 1.2380 | 0.645 | 0.4996 | 2.6602 | 0.645 | 0.6464 | 0.1884 | 0.1447 |
| 0.036 | 84.0 | 2100 | 1.2379 | 0.645 | 0.4995 | 2.6601 | 0.645 | 0.6464 | 0.1849 | 0.1447 |
| 0.036 | 85.0 | 2125 | 1.2380 | 0.6425 | 0.4996 | 2.6600 | 0.6425 | 0.6443 | 0.1895 | 0.1449 |
| 0.036 | 86.0 | 2150 | 1.2381 | 0.645 | 0.4996 | 2.6601 | 0.645 | 0.6464 | 0.1870 | 0.1447 |
| 0.036 | 87.0 | 2175 | 1.2379 | 0.6425 | 0.4995 | 2.6601 | 0.6425 | 0.6443 | 0.1925 | 0.1449 |
| 0.036 | 88.0 | 2200 | 1.2379 | 0.645 | 0.4995 | 2.6600 | 0.645 | 0.6464 | 0.1900 | 0.1447 |
| 0.036 | 89.0 | 2225 | 1.2379 | 0.645 | 0.4995 | 2.6601 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
| 0.036 | 90.0 | 2250 | 1.2379 | 0.645 | 0.4995 | 2.6599 | 0.645 | 0.6464 | 0.1900 | 0.1447 |
| 0.036 | 91.0 | 2275 | 1.2378 | 0.6425 | 0.4995 | 2.6600 | 0.6425 | 0.6443 | 0.1875 | 0.1448 |
| 0.036 | 92.0 | 2300 | 1.2379 | 0.645 | 0.4996 | 2.6600 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
| 0.036 | 93.0 | 2325 | 1.2379 | 0.645 | 0.4995 | 2.6600 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
| 0.036 | 94.0 | 2350 | 1.2378 | 0.645 | 0.4995 | 2.6599 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
| 0.036 | 95.0 | 2375 | 1.2378 | 0.645 | 0.4995 | 2.6600 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
| 0.036 | 96.0 | 2400 | 1.2378 | 0.645 | 0.4995 | 2.6600 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
| 0.036 | 97.0 | 2425 | 1.2378 | 0.645 | 0.4995 | 2.6600 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
| 0.036 | 98.0 | 2450 | 1.2378 | 0.645 | 0.4995 | 2.6600 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
| 0.036 | 99.0 | 2475 | 1.2378 | 0.645 | 0.4995 | 2.6600 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
| 0.036 | 100.0 | 2500 | 1.2378 | 0.645 | 0.4995 | 2.6600 | 0.645 | 0.6464 | 0.1850 | 0.1447 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
Yuhan123/ppo | Yuhan123 | 2023-07-11T08:04:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T08:04:29Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.21 +/- 31.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
nitzankarby/my-ppo-lunarLander-model | nitzankarby | 2023-07-11T08:01:01Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T07:47:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.39 +/- 13.39
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-close-000001.SH-v2 | hw2942 | 2023-07-11T07:46:55Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-11T07:23:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-close-000001.SH-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-close-000001.SH-v2
This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6936
- Accuracy: 0.44
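A rough inference sketch, assuming the checkpoint loads with the standard Longformer sequence-classification classes (the Chinese headline is an illustrative placeholder and the label names depend on the checkpoint's config):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-close-000001.SH-v2",
)
print(clf("上证指数早盘高开,市场情绪回暖。"))
```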
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 38 | 0.6894 | 0.56 |
| No log | 2.0 | 76 | 0.7096 | 0.44 |
| No log | 3.0 | 114 | 0.6915 | 0.56 |
| No log | 4.0 | 152 | 0.6872 | 0.56 |
| No log | 5.0 | 190 | 0.6881 | 0.56 |
| No log | 6.0 | 228 | 0.7079 | 0.44 |
| No log | 7.0 | 266 | 0.7026 | 0.44 |
| No log | 8.0 | 304 | 0.6914 | 0.56 |
| No log | 9.0 | 342 | 0.7012 | 0.44 |
| No log | 10.0 | 380 | 0.6936 | 0.44 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sipablo/gatau | sipablo | 2023-07-11T07:41:52Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T07:41:52Z | ---
license: creativeml-openrail-m
---
|
nolanaatama/tknshkrhllvnrvcv2dclkd44 | nolanaatama | 2023-07-11T07:32:03Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T07:25:16Z | ---
license: creativeml-openrail-m
---
|
merve/sam-finetuned | merve | 2023-07-11T06:50:29Z | 74 | 0 | transformers | [
"transformers",
"tf",
"sam",
"mask-generation",
"generated_from_keras_callback",
"base_model:facebook/sam-vit-base",
"base_model:finetune:facebook/sam-vit-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | mask-generation | 2023-07-11T06:11:49Z | ---
license: apache-2.0
base_model: facebook/sam-vit-base
tags:
- generated_from_keras_callback
model-index:
- name: sam-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sam-finetuned
## Model description
This model is a fine-tuned version of [facebook/sam-vit-base](https://huggingface.co/facebook/sam-vit-base) on a breast cancer dataset. It is not intended for production use; it was trained as a Keras example.
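A rough inference sketch for the TensorFlow weights (the image path and the prompt point are placeholders, not values from the training data):
```python
from PIL import Image
from transformers import SamProcessor, TFSamModel

processor = SamProcessor.from_pretrained("merve/sam-finetuned")
model = TFSamModel.from_pretrained("merve/sam-finetuned")

image = Image.open("example_scan.png").convert("RGB")  # placeholder path
inputs = processor(image, input_points=[[[450, 600]]], return_tensors="tf")
outputs = model(
    pixel_values=inputs["pixel_values"],
    input_points=inputs["input_points"],
)
print(outputs.pred_masks.shape)  # predicted mask logits
```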
## Training procedure
### Training hyperparameters
The model was trained for 20 epochs.
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
You can see an example inference below.

### Framework versions
- Transformers 4.31.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
hongrui/mammogram_v_2_2_2 | hongrui | 2023-07-11T06:49:08Z | 4 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-10T23:17:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_2_2_2
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images in the following.




|
ashnrk/textual_inversion_forest | ashnrk | 2023-07-11T06:44:44Z | 21 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-11T05:42:48Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/textual_inversion_forest
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images in the following.
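A minimal loading sketch, assuming the usual `diffusers` textual-inversion workflow (the placeholder token `<forest>` and the prompt are illustrative assumptions; check the repo files for the actual learned token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("ashnrk/textual_inversion_forest")
image = pipe("an aerial photo of <forest>").images[0]
image.save("forest.png")
```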
|
Winmodel/CartPole-v1 | Winmodel | 2023-07-11T06:39:24Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T05:40:07Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bfriederich/distilbert-base-uncased-news-trained | bfriederich | 2023-07-11T06:37:27Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-07T20:04:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-news-trained
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9194736842105263
- name: F1
type: f1
value: 0.9195099897221968
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-news-trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2420
- Accuracy: 0.9195
- F1: 0.9195
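A minimal usage sketch (the headline is an arbitrary example; label names depend on the checkpoint's config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="bfriederich/distilbert-base-uncased-news-trained")
print(clf("Stocks rally as tech earnings beat expectations."))
```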
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.346 | 1.0 | 469 | 0.2511 | 0.9142 | 0.9142 |
| 0.1874 | 2.0 | 938 | 0.2420 | 0.9195 | 0.9195 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hongrui/mammogram_v_2_2_1 | hongrui | 2023-07-11T06:21:13Z | 6 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-10T22:48:47Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_2_2_1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images in the following.




|
joon09/kor-naver-ner-name-v2 | joon09 | 2023-07-11T05:59:40Z | 146 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-24T03:58:09Z | ---
tags:
- generated_from_trainer
model-index:
- name: kor-naver-ner-name-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kor-naver-ner-name-v2
This model was trained from scratch on an unknown dataset.
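A minimal token-classification (NER) sketch; the Korean sentence is only an illustrative placeholder:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="joon09/kor-naver-ner-name-v2",
    aggregation_strategy="simple",
)
print(ner("홍길동 기자가 서울에서 취재했다."))
```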
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu117
- Datasets 2.2.2
- Tokenizers 0.13.3
|
mrovejaxd/ABL_d | mrovejaxd | 2023-07-11T05:58:29Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-11T05:04:55Z | ---
tags:
- generated_from_trainer
model-index:
- name: ABL_d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABL_d
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
|
saillab/Nepali_Alpaca_ChatGPT_33B | saillab | 2023-07-11T05:46:37Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-09T05:08:20Z | # Nepali-Chat-GPT 33B 🇳🇵
This repo consists of the demo and the necessary guidelines for creating the open-source Nepali Alpaca ChatGPT.
To the best of our knowledge, this is the first open-source large language model created specifically for the **Nepali language**.
## Model
**🇳🇵Nepali Alpaca Chat-GPT-33B**: For this model we fine-tuned timdettmers/guanaco-33b-merged on the Nepali-Alpaca-GPT4 dataset.
**🇳🇵33B Model Generated Output**
```
### Instruction: आर्टिफिसियल इन्टेलिजेन्स भनेको के हो
### Output: आर्टिफिसियल इन्टेलिजेन्स, सामान्यतया एआई भनेर चिनिन्छ, कम्प्युटर विज्ञानको एउटा शाखा हो जसले मानिसले जस्तै सोच्न र काम गर्न सक्ने कम्प्युटर प्रणालीहरू विकास गर्ने प्रक्रियालाई बुझाउँछ। यसमा मेसिन लर्निङ, न्यूरल नेटवर्कहरू, र प्राकृतिक भाषा प्रशोधन जस्ता विभिन्न प्रविधिहरू समावेश हुन्छन् जसले कम्प्युटरहरूलाई स्पष्ट रूपमा प्रोग्राम नगरी सिक्न र समस्याहरू समाधान गर्न सक्षम बनाउन मद्दत गर्दछ।
### Instruction: वातावरण कसरी जोगाउने ?
### Output: त्यहाँ धेरै तरिकाहरू छन् जुन हामी वातावरण जोगाउन सक्छौं। यहाँ केही सुझावहरू छन् 1. घटाउनुहोस्, पुन: प्रयोग गर्नुहोस्, र पुन: प्रयोग गर्नुहोस् घटाउनुहोस्, पुन: प्रयोग गर्नुहोस्, र पुन: प्रयोग गर्नुहोस् भनेको फोहोर उत्पादन घटाउने, पुन: प्रयोग गर्ने, र पुन: प्रयोग गर्ने सबैभन्दा प्रभावकारी तरिकाहरू मध्ये एक हो। यसले घटाउने, पुन: प्रयोग गर्ने र पुन: प्रयोग गर्नेलाई प्रोत्साहन दिन्छ र वातावरणमा नकारात्मक प्रभावहरू कम गर्न मद्दत गर्दछ। २. ऊर्जा खपत घटाउनुहोस्: ऊर्जा खपत घटाउनु भनेको ऊर्जा-कुशल उपकरणहरू प्रयोग गर्नु, प्रयोगमा नभएको बेला बत्तीहरू र इलेक्ट्रोनिक्सहरू बन्द गर्नु, र प्रयोगमा नभएको इलेक्ट्रोनिक्स र उपकरणहरू पुन: प्रयोग गर्ने जस्ता कदमहरू चाल्नु हो। यसले ऊर्जा खपत घटाउने र वातावरणमा नकारात्मक प्रभावहरू घटाउने एक प्रभावकारी तरिका हो। 3. एकल-प्रयोग प्लास्टिक घटाउनुहोस्: एकल-प्रयोग प्लास्टिक घटाउनु भनेको एकल-प्रयोग प्लास्टिक सामग्रीहरू प्रयोग गर्नु अघि उनीहरूलाई पुन: प्रयोग गर्ने
```
### How to contribute?
We highly encourage you to contribute to this open-source project. Please feel free to fill this Google Form: https://forms.gle/CRN1KJsjNpNsfSR66 |
saillab/Nepali_Alpaca_ChatGPT_13B | saillab | 2023-07-11T05:44:42Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-06T21:36:45Z | # Nepali-Chat-GPT 13B 🇳🇵
This repo consists of the demo and the necessary guidelines for creating the open-source Nepali Alpaca ChatGPT.
To the best of our knowledge, this is the first open-source large language model created specifically for the **Nepali language**.
## Model
**🇳🇵Nepali Alpaca Chat-GPT-13B**: We fine-tuned the TheBloke/stable-vicuna-13B-GPTQ model on the Nepali-Alpaca-GPT4 dataset.
**🇳🇵13B Model Generated Output**
```
### Instruction: कृपया मेसिन लर्निङको व्याख्या गर्नुहोस्।
### Output: मेसिन लर्निङ भनेको डाटाबाट सिक्ने कम्प्युटर-आधारित अनुप्रयोगहरू जस्तै मेसिन लर्निङ संरचनाहरू द्वारा प्रशिक्षण गरिएको डेटा प्रयोग गरी सञ्चालन गर्ने विशेष प्रकारको कम्प्युटर प्रणाली हो। यस प्रकारको कम्प्युटर प्रणालीले स्थानान्तरण र प्रशिक्षण गर्न डेटा प्रयोग गर्छ, र यसको कार्यसम्पादनलाई अनुमानित रूपमा सुधार गर्न सक्छ। मेसिन लर्निङ प्रणालीहरूले उनीहरूको लिङ्क र अनुप्रयोगहरू सिक्ने र प्रशिक्षण गर्ने डेटामा सुधार गर्न सक्छन्, सामान्यतया व्यक्तिगत प्रविधिहरू जस्तै चिकनहरू पहिचान गर्ने र इन्स्ट्रुक्टरहरू व्यवहार गर्ने समावेश गर्दछ। मेसिन लर्निङ प्रणालीहरू विभिन्न उद्योगहरूमा फाइदा पुर्याउन सक्छन्, जस्तै स्वास्थ्य सेवा, उद्योग, र वित्त, र यसलाई धेरै प्रयोगकर्ता अनुभवहरू र स्रोतहरू प्रयोग गरेर विकास गरिएको छ।
### Instruction: प्राकृतिक स्रोतहरू र संरक्षण गर्न प्रविधिहरू प्रवर्द्धन गर्नुहोस्।`
### Output: यसमा ऊर्जा परिणाम बढाउने टेक्नोलोजी, सौर्य ईन्धन र फोहोर उपकरणहरू, र नवीकरणीय आविष्कार र प्रविधिहरू समावेश छन्। रोजगार र विकासमा अन्तर्राष्ट्रिय व्यवस्था र अन्तर्राष्ट्रिय घटना रणनीतिहरू लागू गर्नुहोस्। २. पारिस्थितिक र फसलहरूको लागि बाहिर निकाल्नुहोस्। जनसंख्या, उत्सर्जन, र कार्बन उत्सर्जन कम गर्ने सहयोग र अन्तर्राष्ट्रिय साझेदारीहरूको माध्यमबाट सम्बोधन गर्नुहोस्। 3. मानव संसाधनहरूमा दक्षता प्रवर्द्धन गर्नुहोस्। समुद्री र जलीय पारिस्थितिहरूको सुरक्षा, संकुचन, र स्रोतहरू रोक्न मानव व्यवहार र प्रविधिहरूको प्रयोग सुधार गर्नुहोस्। 4. नवीकरणीय प्रविधि र उपकरणहरूको अनुप्रयोग बढाउनुहोस्। सार्वजनिक यातायात, नवीकरणीय ऊर्जा स्रोतहरू, र नवीकरणीय इन्धन स्रोतहरू समावेश छन्।`
```
### How to contribute?
We highly encourage you to contribute to this open-source project. Please feel free to fill this Google Form: https://forms.gle/CRN1KJsjNpNsfSR66 |
saillab/Nepali_Alpaca_ChatGPT_7B | saillab | 2023-07-11T05:37:08Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-10T04:29:09Z | # Nepali-Chat-GPT 7B 🇳🇵
This repo consists of the demo and the necessary guidelines for creating the open-source Nepali Alpaca ChatGPT.
To the best of our knowledge, this is the first open-source large language model created specifically for the **Nepali language**.
## Model
**🇳🇵Nepali Alpaca Chat-GPT-7B**: We fine-tuned the timdettmers/guanaco-7b model on the Nepali-Alpaca-GPT4 dataset.
HuggingFace Link: https://huggingface.co/saillab/Nepali_Alpaca_ChatGPT_7B
**🇳🇵7B Model Generated Output**
```
### Instruction:आर्टिफिशियल इन्टेलिजेन्सको व्याख्या गर्नुहोस्।
### Output: आर्टिफिसियल इन्टेलिजेन्स, प्रायः एआई भनेर चिनिन्छ, कम्प्युटर प्रणालीहरूको विकास हो जसले सामान्यतया मानव बुद्धिको आवश्यकता पर्ने कार्यहरू गर्न सक्छ। यसले कम्प्युटरहरूलाई स्पष्ट रूपमा प्रोग्राम नगरी सिक्न र सिक्न सक्ने क्षमता प्रदान गर्दछ, जसले तिनीहरूलाई स्वचालित रूपमा समस्याहरू समाधान गर्न र निर्णय गर्न सक्षम बनाउँछ। यसले कम्प्युटर प्रणालीहरूलाई मानिसले जस्तै सोच्न र निर्णय गर्न सक्षम बनाउँछ। यसले स्वास्थ्य सेवा उद्योगमा क्रान्तिकारी परिवर्तन गर्ने क्षमता प्रदान गर्दछ
### Instruction: विश्वका दुर्लभ भाषाहरूको संरक्षणको महत्त्व तीन वाक्यमा व्याख्या गर्नुहोस्।
### Output: विश्वका दुर्लभ भाषाहरूको संरक्षण धेरै कारणहरूको लागि महत्त्वपूर्ण छ। पहिलो, यसले संस्कृतिको विविधतालाई बढावा दिन्छ र विश्वभरका मानिसहरूलाई विभिन्न भाषाहरू सिक्न अनुमति दिन्छ। दोस्रो, संरक्षित भाषाहरूको संरक्षणले विगतका भाषाहरूलाई जोगाउँछ जुन अझै पनि प्रायः प्रयोग हुन्छन्। तेस्रो, संरक्षित भाषाहरूको संरक्षणले विभिन्न संस्कृतिका सांस्कृतिक सामग्रीहरूको संरक्षण र संरक्षण गर्दछ, जसले संस्कृतिको विविधतालाई बढावा दिन्छ।
```
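A generation sketch, under the assumption that the repo hosts full merged weights and follows the instruction/output prompt format shown above (the Nepali question is an illustrative placeholder); if the repo only contains a PEFT adapter, load the guanaco-7b base model and attach the adapter with `peft` instead:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "saillab/Nepali_Alpaca_ChatGPT_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction: नेपालको राजधानी के हो?\n### Output:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```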
### How to contribute?
We highly encourage you to contribute to this open-source project. Please feel free to fill this Google Form: https://forms.gle/CRN1KJsjNpNsfSR66 |
gfx-labs/xlm-roberta-base-finetuned-panx-hindi | gfx-labs | 2023-07-11T05:27:20Z | 136 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-11T05:02:51Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-hi
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.hi
split: validation
args: PAN-X.hi
metrics:
- name: F1
type: f1
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-hi
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
- F1: 0.875
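A minimal NER sketch (the Hindi sentence is an illustrative placeholder):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gfx-labs/xlm-roberta-base-finetuned-panx-hindi",
    aggregation_strategy="simple",
)
print(ner("नरेन्द्र मोदी दिल्ली में रहते हैं।"))
```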
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6369 | 1.0 | 188 | 0.2775 | 0.8157 |
| 0.2751 | 2.0 | 376 | 0.2537 | 0.8402 |
| 0.1737 | 3.0 | 564 | 0.2359 | 0.8606 |
| 0.1188 | 4.0 | 752 | 0.2334 | 0.875 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Sigwang/pegasus-samsum | Sigwang | 2023-07-11T05:25:51Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-11T04:18:02Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4848
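A minimal summarization sketch (the dialogue is an arbitrary example in the SAMSum style):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Sigwang/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue)[0]["summary_text"])
```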
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6909 | 0.54 | 500 | 1.4848 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
JennnDexter/dreambooth | JennnDexter | 2023-07-11T05:17:33Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-15T07:49:24Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - JennnDexter/dreambooth
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
lovelyxs/Reinforce-CartPole-v1 | lovelyxs | 2023-07-11T05:06:07Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T05:05:55Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
retroai818/ppo-LunarLander-v2 | retroai818 | 2023-07-11T04:08:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T00:27:35Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.62 +/- 26.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Mocoto23/distilbert-base-uncased-finetuned-cola | Mocoto23 | 2023-07-11T04:03:38Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-11T02:45:37Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Mocoto23/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mocoto23/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1895
- Validation Loss: 0.5414
- Train Matthews Correlation: 0.5167
- Epoch: 2
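A minimal usage sketch for the TensorFlow weights (the sentence is an arbitrary example; label names depend on the checkpoint's config):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Mocoto23/distilbert-base-uncased-finetuned-cola",
    framework="tf",
)
print(clf("The book was read quickly by the girl."))
```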
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5216 | 0.4673 | 0.4507 | 0 |
| 0.3159 | 0.4683 | 0.4925 | 1 |
| 0.1895 | 0.5414 | 0.5167 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LuisFelipe11/ppo-Huggy | LuisFelipe11 | 2023-07-11T03:58:42Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-11T03:58:39Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: LuisFelipe11/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
SpringYung/falcon_with_10latex_v2 | SpringYung | 2023-07-11T03:42:42Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-11T03:41:43Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
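For reference, a sketch of the equivalent `BitsAndBytesConfig` implied by the values above (how the base model and adapter are loaded around it is left to the user):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```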
### Framework versions
- PEFT 0.4.0.dev0
|
alsonlai/dqn-SpaceInvadersNoFrameskip-v4 | alsonlai | 2023-07-11T03:37:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T03:37:27Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 499.50 +/- 146.57
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alsonlai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alsonlai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alsonlai
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
zwtharry/Taxiv3 | zwtharry | 2023-07-11T03:29:10Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T03:29:08Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxiv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks

model = load_from_hub(repo_id="zwtharry/Taxiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t2.5_a0.5 | jordyvl | 2023-07-11T03:27:58Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-11T02:15:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4583
- Accuracy: 0.655
- Brier Loss: 0.4857
- Nll: 2.9372
- F1 Micro: 0.655
- F1 Macro: 0.6591
- Ece: 0.1679
- Aurc: 0.1394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 4.2264 | 0.1375 | 0.9289 | 15.9084 | 0.1375 | 0.1395 | 0.1536 | 0.8596 |
| No log | 2.0 | 50 | 3.2078 | 0.405 | 0.7396 | 8.9647 | 0.405 | 0.3723 | 0.2073 | 0.3570 |
| No log | 3.0 | 75 | 2.4477 | 0.4975 | 0.6180 | 5.3439 | 0.4975 | 0.4756 | 0.1714 | 0.2421 |
| No log | 4.0 | 100 | 2.2058 | 0.545 | 0.5825 | 4.3028 | 0.545 | 0.5448 | 0.1681 | 0.2147 |
| No log | 5.0 | 125 | 2.1459 | 0.5325 | 0.6143 | 4.3798 | 0.5325 | 0.5164 | 0.2012 | 0.2274 |
| No log | 6.0 | 150 | 2.0457 | 0.5825 | 0.5625 | 4.1921 | 0.5825 | 0.5823 | 0.1712 | 0.2008 |
| No log | 7.0 | 175 | 1.9438 | 0.575 | 0.5557 | 4.2405 | 0.575 | 0.5654 | 0.1805 | 0.1894 |
| No log | 8.0 | 200 | 1.9821 | 0.5675 | 0.5766 | 3.8326 | 0.5675 | 0.5665 | 0.1815 | 0.2050 |
| No log | 9.0 | 225 | 2.1566 | 0.5425 | 0.6068 | 4.2488 | 0.5425 | 0.5367 | 0.2053 | 0.2167 |
| No log | 10.0 | 250 | 1.9672 | 0.5925 | 0.5692 | 4.3417 | 0.5925 | 0.5968 | 0.2005 | 0.2114 |
| No log | 11.0 | 275 | 2.0417 | 0.5725 | 0.6080 | 3.6972 | 0.5725 | 0.5608 | 0.2005 | 0.2168 |
| No log | 12.0 | 300 | 1.9432 | 0.585 | 0.5704 | 3.6005 | 0.585 | 0.5840 | 0.1976 | 0.1939 |
| No log | 13.0 | 325 | 1.9031 | 0.585 | 0.5816 | 4.0984 | 0.585 | 0.5835 | 0.1996 | 0.1911 |
| No log | 14.0 | 350 | 1.8994 | 0.5925 | 0.5897 | 4.2703 | 0.5925 | 0.5926 | 0.2211 | 0.2041 |
| No log | 15.0 | 375 | 1.8136 | 0.6325 | 0.5297 | 4.5861 | 0.6325 | 0.6299 | 0.1622 | 0.1578 |
| No log | 16.0 | 400 | 1.6961 | 0.5925 | 0.5300 | 4.0317 | 0.5925 | 0.5839 | 0.1909 | 0.1630 |
| No log | 17.0 | 425 | 1.7687 | 0.61 | 0.5357 | 3.6514 | 0.61 | 0.6110 | 0.1715 | 0.1703 |
| No log | 18.0 | 450 | 1.8963 | 0.6 | 0.5785 | 4.7474 | 0.6 | 0.5842 | 0.2168 | 0.1893 |
| No log | 19.0 | 475 | 1.7545 | 0.6175 | 0.5506 | 4.4192 | 0.6175 | 0.6086 | 0.2006 | 0.1759 |
| 0.8611 | 20.0 | 500 | 1.7832 | 0.61 | 0.5546 | 4.0543 | 0.61 | 0.6099 | 0.2133 | 0.1662 |
| 0.8611 | 21.0 | 525 | 1.7788 | 0.5875 | 0.5718 | 3.8585 | 0.5875 | 0.5855 | 0.2084 | 0.1848 |
| 0.8611 | 22.0 | 550 | 1.6323 | 0.62 | 0.5184 | 3.6953 | 0.62 | 0.6146 | 0.1921 | 0.1588 |
| 0.8611 | 23.0 | 575 | 1.6384 | 0.6325 | 0.5431 | 3.5349 | 0.6325 | 0.6269 | 0.2042 | 0.1678 |
| 0.8611 | 24.0 | 600 | 1.7895 | 0.62 | 0.5588 | 4.2768 | 0.62 | 0.6169 | 0.1993 | 0.1885 |
| 0.8611 | 25.0 | 625 | 1.5712 | 0.6175 | 0.5111 | 3.1891 | 0.6175 | 0.6199 | 0.1777 | 0.1552 |
| 0.8611 | 26.0 | 650 | 1.6139 | 0.62 | 0.5284 | 3.0912 | 0.62 | 0.6238 | 0.1793 | 0.1599 |
| 0.8611 | 27.0 | 675 | 1.6449 | 0.6375 | 0.5190 | 4.0147 | 0.6375 | 0.6313 | 0.1794 | 0.1606 |
| 0.8611 | 28.0 | 700 | 1.6379 | 0.6325 | 0.5355 | 3.5225 | 0.6325 | 0.6300 | 0.1859 | 0.1693 |
| 0.8611 | 29.0 | 725 | 1.5486 | 0.6375 | 0.5202 | 3.1611 | 0.6375 | 0.6407 | 0.1908 | 0.1608 |
| 0.8611 | 30.0 | 750 | 1.5410 | 0.63 | 0.5074 | 3.2562 | 0.63 | 0.6340 | 0.1772 | 0.1424 |
| 0.8611 | 31.0 | 775 | 1.5033 | 0.6575 | 0.4973 | 3.3321 | 0.6575 | 0.6619 | 0.1802 | 0.1451 |
| 0.8611 | 32.0 | 800 | 1.6065 | 0.6375 | 0.5260 | 3.4264 | 0.6375 | 0.6451 | 0.2028 | 0.1670 |
| 0.8611 | 33.0 | 825 | 1.5188 | 0.6525 | 0.5028 | 3.5128 | 0.6525 | 0.6536 | 0.1813 | 0.1491 |
| 0.8611 | 34.0 | 850 | 1.5034 | 0.635 | 0.5005 | 3.4093 | 0.635 | 0.6345 | 0.1602 | 0.1506 |
| 0.8611 | 35.0 | 875 | 1.5711 | 0.66 | 0.5163 | 3.6591 | 0.66 | 0.6587 | 0.1884 | 0.1574 |
| 0.8611 | 36.0 | 900 | 1.5224 | 0.6475 | 0.5057 | 3.1773 | 0.6475 | 0.6491 | 0.1802 | 0.1526 |
| 0.8611 | 37.0 | 925 | 1.4781 | 0.6475 | 0.4938 | 3.3389 | 0.6475 | 0.6508 | 0.1753 | 0.1420 |
| 0.8611 | 38.0 | 950 | 1.4991 | 0.65 | 0.5005 | 3.4077 | 0.65 | 0.6541 | 0.1843 | 0.1482 |
| 0.8611 | 39.0 | 975 | 1.4613 | 0.6625 | 0.4848 | 3.2461 | 0.6625 | 0.6675 | 0.1647 | 0.1386 |
| 0.0907 | 40.0 | 1000 | 1.4824 | 0.64 | 0.4951 | 3.1830 | 0.64 | 0.6444 | 0.1779 | 0.1431 |
| 0.0907 | 41.0 | 1025 | 1.5224 | 0.6625 | 0.5004 | 3.4231 | 0.6625 | 0.6659 | 0.1769 | 0.1506 |
| 0.0907 | 42.0 | 1050 | 1.4882 | 0.6375 | 0.5013 | 3.0893 | 0.6375 | 0.6451 | 0.1844 | 0.1465 |
| 0.0907 | 43.0 | 1075 | 1.4852 | 0.665 | 0.4901 | 3.4025 | 0.665 | 0.6685 | 0.1869 | 0.1442 |
| 0.0907 | 44.0 | 1100 | 1.4744 | 0.65 | 0.4934 | 3.4829 | 0.65 | 0.6528 | 0.1836 | 0.1426 |
| 0.0907 | 45.0 | 1125 | 1.4735 | 0.66 | 0.4892 | 3.1763 | 0.66 | 0.6642 | 0.1666 | 0.1427 |
| 0.0907 | 46.0 | 1150 | 1.4690 | 0.65 | 0.4898 | 3.0960 | 0.65 | 0.6537 | 0.1642 | 0.1427 |
| 0.0907 | 47.0 | 1175 | 1.4773 | 0.6475 | 0.4909 | 3.2535 | 0.6475 | 0.6506 | 0.1749 | 0.1446 |
| 0.0907 | 48.0 | 1200 | 1.4632 | 0.6575 | 0.4884 | 3.1685 | 0.6575 | 0.6625 | 0.1750 | 0.1398 |
| 0.0907 | 49.0 | 1225 | 1.4712 | 0.66 | 0.4896 | 3.0915 | 0.66 | 0.6634 | 0.1697 | 0.1432 |
| 0.0907 | 50.0 | 1250 | 1.4630 | 0.655 | 0.4883 | 3.0953 | 0.655 | 0.6591 | 0.1650 | 0.1406 |
| 0.0907 | 51.0 | 1275 | 1.4607 | 0.66 | 0.4860 | 3.0153 | 0.66 | 0.6653 | 0.1665 | 0.1411 |
| 0.0907 | 52.0 | 1300 | 1.4646 | 0.6475 | 0.4889 | 3.0242 | 0.6475 | 0.6510 | 0.1713 | 0.1426 |
| 0.0907 | 53.0 | 1325 | 1.4717 | 0.6575 | 0.4904 | 3.0926 | 0.6575 | 0.6605 | 0.1789 | 0.1428 |
| 0.0907 | 54.0 | 1350 | 1.4554 | 0.645 | 0.4868 | 3.0882 | 0.645 | 0.6489 | 0.1664 | 0.1408 |
| 0.0907 | 55.0 | 1375 | 1.4581 | 0.6575 | 0.4855 | 3.0904 | 0.6575 | 0.6614 | 0.1602 | 0.1404 |
| 0.0907 | 56.0 | 1400 | 1.4588 | 0.655 | 0.4866 | 3.0910 | 0.655 | 0.6598 | 0.1722 | 0.1405 |
| 0.0907 | 57.0 | 1425 | 1.4582 | 0.6575 | 0.4859 | 3.0143 | 0.6575 | 0.6619 | 0.1540 | 0.1397 |
| 0.0907 | 58.0 | 1450 | 1.4613 | 0.6575 | 0.4865 | 3.0143 | 0.6575 | 0.6620 | 0.1659 | 0.1402 |
| 0.0907 | 59.0 | 1475 | 1.4593 | 0.655 | 0.4867 | 3.0140 | 0.655 | 0.6599 | 0.1583 | 0.1402 |
| 0.0478 | 60.0 | 1500 | 1.4593 | 0.655 | 0.4864 | 3.0148 | 0.655 | 0.6593 | 0.1657 | 0.1404 |
| 0.0478 | 61.0 | 1525 | 1.4588 | 0.655 | 0.4861 | 3.0165 | 0.655 | 0.6590 | 0.1757 | 0.1401 |
| 0.0478 | 62.0 | 1550 | 1.4598 | 0.6575 | 0.4864 | 3.0140 | 0.6575 | 0.6616 | 0.1528 | 0.1403 |
| 0.0478 | 63.0 | 1575 | 1.4595 | 0.6575 | 0.4865 | 3.0143 | 0.6575 | 0.6623 | 0.1538 | 0.1400 |
| 0.0478 | 64.0 | 1600 | 1.4591 | 0.655 | 0.4864 | 2.9404 | 0.655 | 0.6591 | 0.1669 | 0.1399 |
| 0.0478 | 65.0 | 1625 | 1.4568 | 0.655 | 0.4854 | 2.9393 | 0.655 | 0.6596 | 0.1644 | 0.1393 |
| 0.0478 | 66.0 | 1650 | 1.4569 | 0.655 | 0.4855 | 3.0146 | 0.655 | 0.6599 | 0.1619 | 0.1401 |
| 0.0478 | 67.0 | 1675 | 1.4592 | 0.655 | 0.4865 | 2.9380 | 0.655 | 0.6596 | 0.1540 | 0.1399 |
| 0.0478 | 68.0 | 1700 | 1.4580 | 0.66 | 0.4858 | 2.9406 | 0.66 | 0.6641 | 0.1850 | 0.1396 |
| 0.0478 | 69.0 | 1725 | 1.4591 | 0.655 | 0.4865 | 2.9381 | 0.655 | 0.6593 | 0.1651 | 0.1399 |
| 0.0478 | 70.0 | 1750 | 1.4586 | 0.655 | 0.4859 | 2.9388 | 0.655 | 0.6596 | 0.1773 | 0.1397 |
| 0.0478 | 71.0 | 1775 | 1.4585 | 0.6525 | 0.4862 | 2.9366 | 0.6525 | 0.6566 | 0.1644 | 0.1400 |
| 0.0478 | 72.0 | 1800 | 1.4582 | 0.66 | 0.4858 | 2.9385 | 0.66 | 0.6644 | 0.1809 | 0.1396 |
| 0.0478 | 73.0 | 1825 | 1.4577 | 0.65 | 0.4857 | 2.9374 | 0.65 | 0.6543 | 0.1715 | 0.1403 |
| 0.0478 | 74.0 | 1850 | 1.4578 | 0.6525 | 0.4857 | 2.9381 | 0.6525 | 0.6565 | 0.1748 | 0.1401 |
| 0.0478 | 75.0 | 1875 | 1.4583 | 0.65 | 0.4860 | 2.9371 | 0.65 | 0.6544 | 0.1661 | 0.1402 |
| 0.0478 | 76.0 | 1900 | 1.4582 | 0.65 | 0.4859 | 2.9369 | 0.65 | 0.6544 | 0.1760 | 0.1402 |
| 0.0478 | 77.0 | 1925 | 1.4585 | 0.65 | 0.4859 | 2.9367 | 0.65 | 0.6546 | 0.1609 | 0.1403 |
| 0.0478 | 78.0 | 1950 | 1.4580 | 0.65 | 0.4858 | 2.9372 | 0.65 | 0.6546 | 0.1626 | 0.1401 |
| 0.0478 | 79.0 | 1975 | 1.4578 | 0.6525 | 0.4857 | 2.9369 | 0.6525 | 0.6564 | 0.1706 | 0.1400 |
| 0.0457 | 80.0 | 2000 | 1.4584 | 0.6525 | 0.4859 | 2.9370 | 0.6525 | 0.6564 | 0.1712 | 0.1402 |
| 0.0457 | 81.0 | 2025 | 1.4587 | 0.6525 | 0.4860 | 2.9370 | 0.6525 | 0.6568 | 0.1631 | 0.1402 |
| 0.0457 | 82.0 | 2050 | 1.4584 | 0.6525 | 0.4859 | 2.9369 | 0.6525 | 0.6568 | 0.1631 | 0.1401 |
| 0.0457 | 83.0 | 2075 | 1.4581 | 0.65 | 0.4858 | 2.9369 | 0.65 | 0.6543 | 0.1703 | 0.1401 |
| 0.0457 | 84.0 | 2100 | 1.4581 | 0.6525 | 0.4858 | 2.9370 | 0.6525 | 0.6564 | 0.1588 | 0.1401 |
| 0.0457 | 85.0 | 2125 | 1.4582 | 0.6525 | 0.4858 | 2.9370 | 0.6525 | 0.6568 | 0.1723 | 0.1400 |
| 0.0457 | 86.0 | 2150 | 1.4582 | 0.6525 | 0.4858 | 2.9371 | 0.6525 | 0.6564 | 0.1724 | 0.1400 |
| 0.0457 | 87.0 | 2175 | 1.4582 | 0.6525 | 0.4858 | 2.9369 | 0.6525 | 0.6567 | 0.1720 | 0.1400 |
| 0.0457 | 88.0 | 2200 | 1.4582 | 0.6525 | 0.4858 | 2.9372 | 0.6525 | 0.6567 | 0.1606 | 0.1401 |
| 0.0457 | 89.0 | 2225 | 1.4583 | 0.6525 | 0.4858 | 2.9372 | 0.6525 | 0.6567 | 0.1665 | 0.1401 |
| 0.0457 | 90.0 | 2250 | 1.4583 | 0.6525 | 0.4857 | 2.9370 | 0.6525 | 0.6564 | 0.1688 | 0.1400 |
| 0.0457 | 91.0 | 2275 | 1.4583 | 0.6525 | 0.4858 | 2.9371 | 0.6525 | 0.6567 | 0.1695 | 0.1400 |
| 0.0457 | 92.0 | 2300 | 1.4583 | 0.655 | 0.4858 | 2.9372 | 0.655 | 0.6591 | 0.1660 | 0.1394 |
| 0.0457 | 93.0 | 2325 | 1.4583 | 0.6525 | 0.4857 | 2.9371 | 0.6525 | 0.6565 | 0.1645 | 0.1400 |
| 0.0457 | 94.0 | 2350 | 1.4583 | 0.6525 | 0.4858 | 2.9371 | 0.6525 | 0.6567 | 0.1665 | 0.1399 |
| 0.0457 | 95.0 | 2375 | 1.4583 | 0.6525 | 0.4858 | 2.9372 | 0.6525 | 0.6567 | 0.1704 | 0.1399 |
| 0.0457 | 96.0 | 2400 | 1.4583 | 0.655 | 0.4858 | 2.9372 | 0.655 | 0.6588 | 0.1660 | 0.1395 |
| 0.0457 | 97.0 | 2425 | 1.4582 | 0.6525 | 0.4857 | 2.9372 | 0.6525 | 0.6567 | 0.1704 | 0.1399 |
| 0.0457 | 98.0 | 2450 | 1.4582 | 0.655 | 0.4857 | 2.9372 | 0.655 | 0.6591 | 0.1679 | 0.1394 |
| 0.0457 | 99.0 | 2475 | 1.4583 | 0.6525 | 0.4857 | 2.9372 | 0.6525 | 0.6567 | 0.1704 | 0.1399 |
| 0.0456 | 100.0 | 2500 | 1.4583 | 0.655 | 0.4857 | 2.9372 | 0.655 | 0.6591 | 0.1679 | 0.1394 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
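For completeness, a minimal inference sketch, assuming the checkpoint exposes a standard ViT image-classification head (the sample file name is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t2.5_a0.5",
)

# Classify a scanned document page (RVL-CDIP-style input).
print(classifier("scanned_page.png"))
```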
|
Chickenfish/Daytechillout | Chickenfish | 2023-07-11T03:27:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T03:26:57Z | ---
license: creativeml-openrail-m
---
|
SpringYung/dolly_with_10latex_v2 | SpringYung | 2023-07-11T03:17:46Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-11T03:17:08Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
shikras/shikra-7b-delta-v1-0708 | shikras | 2023-07-11T03:07:55Z | 58 | 3 | transformers | [
"transformers",
"pytorch",
"shikra",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T15:57:55Z | ---
license: cc-by-nc-4.0
---
Shikra-7B-v1-0708: a frequently updated checkpoint for Shikra-7B-v1.

Changelog: added the A-OKVQA dataset for multiple-choice question format training. |
sharpbai/Baichuan-13B-Base | sharpbai | 2023-07-11T02:46:16Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2104.09864",
"arxiv:2108.12409",
"arxiv:2009.03300",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-11T02:37:52Z | ---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
---
# Baichuan-13B-Base
*The weight files are split into 650 MB chunks for convenient and fast parallel downloads.*
A 650 MB split-weight version of [baichuan-inc/Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base).
The original model card is down below
-----------------------------------------
# Baichuan-13B-Base
<!-- Provide a quick summary of what the model is/does. -->
## 介绍
Baichuan-13B-Base为Baichuan-13B系列模型中的预训练版本,经过对齐后的模型可见[Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)。
[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) 是由百川智能继 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 之后开发的包含 130 亿参数的开源可商用的大规模语言模型,在权威的中文和英文 benchmark 上均取得同尺寸最好的效果。本次发布包含有预训练 ([Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)) 和对齐 ([Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)) 两个版本。Baichuan-13B 有如下几个特点:
1. **更大尺寸、更多数据**:Baichuan-13B 在 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 的基础上进一步扩大参数量到 130 亿,并且在高质量的语料上训练了 1.4 万亿 tokens,超过 LLaMA-13B 40%,是当前开源 13B 尺寸下训练数据量最多的模型。支持中英双语,使用 ALiBi 位置编码,上下文窗口长度为 4096。
2. **同时开源预训练和对齐模型**:预训练模型是适用开发者的“基座”,而广大普通用户对有对话功能的对齐模型具有更强的需求。因此本次开源我们同时发布了对齐模型(Baichuan-13B-Chat),具有很强的对话能力,开箱即用,几行代码即可简单的部署。
3. **更高效的推理**:为了支持更广大用户的使用,我们本次同时开源了 int8 和 int4 的量化版本,相对非量化版本在几乎没有效果损失的情况下大大降低了部署的机器资源门槛,可以部署在如 Nvidia 3090 这样的消费级显卡上。
4. **开源免费可商用**:Baichuan-13B 不仅对学术研究完全开放,开发者也仅需邮件申请并获得官方商用许可后,即可以免费商用。
Baichuan-13B-Base is the pre-training version in the Baichuan-13B series of models, and the aligned model can be found at [Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat).
[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) is an open-source, commercially usable large-scale language model developed by Baichuan Intelligence, following [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B). With 13 billion parameters, it achieves the best performance in standard Chinese and English benchmarks among models of its size. This release includes two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Baichuan-13B has the following features:
1. **Larger size, more data**: Baichuan-13B further expands the parameter volume to 13 billion based on [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B), and has trained 1.4 trillion tokens on high-quality corpora, exceeding LLaMA-13B by 40%. It is currently the model with the most training data in the open-source 13B size. It supports both Chinese and English, uses ALiBi position encoding, and has a context window length of 4096.
2. **Open-source pre-training and alignment models simultaneously**: The pre-training model is a "base" suitable for developers, while the general public has a stronger demand for alignment models with dialogue capabilities. Therefore, in this open-source release, we also released the alignment model (Baichuan-13B-Chat), which has strong dialogue capabilities and is ready to use. It can be easily deployed with just a few lines of code.
3. **More efficient inference**: To support a wider range of users, we have open-sourced the INT8 and INT4 quantized versions. The model can be conveniently deployed on consumer GPUs like the Nvidia 3090 with almost no performance loss.
4. **Open-source, free, and commercially usable**: Baichuan-13B is not only fully open to academic research, but developers can also use it for free commercially after applying for and receiving official commercial permission via email.
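A minimal loading sketch for this split-weight mirror, assuming the same `trust_remote_code` interface as the upstream Baichuan-13B-Base release (the prompt is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sharpbai/Baichuan-13B-Base"  # 650 MB split-weight mirror of baichuan-inc/Baichuan-13B-Base

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```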
## 模型详情
### 模型描述
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** 百川智能(Baichuan Intelligent Technology)
- **Email**: [email protected]
- **Language(s) (NLP):** Chinese/English
- **License:** 【Community License for Baichuan-13B Model】([ZH](Baichuan-13B%20%E6%A8%A1%E5%9E%8B%E5%95%86%E7%94%A8%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)|
[EN](Community%20License%20for%20Baichuan-13B%20Model.pdf))
**商业用途还需遵循(For commercial use additional):** 请通过上述Email联系申请书面授权。(Contact us via Email above to apply for written authorization.)
### 模型结构
<!-- Provide the basic links for the model. -->
整体模型基于Baichuan-7B,为了获得更好的推理性能,Baichuan-13B 使用了 ALiBi 线性偏置技术,相对于 Rotary Embedding 计算量更小,对推理性能有显著提升;与标准的 LLaMA-13B 相比,生成 2000 个 tokens 的平均推理速度 (tokens/s),实测提升 31.6%:
| Model | tokens/s |
|-------------|----------|
| LLaMA-13B | 19.4 |
| Baichuan-13B| 25.4 |
具体参数见下表
| 模型名称 | 隐含层维度 | 层数 | 头数 |词表大小 | 总参数量 | 训练数据(tokens) | 位置编码 | 最大长度 |
|-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------|
| Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2万亿 | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 |
| Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4万亿 | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096
The overall model is based on Baichuan-7B. In order to achieve better inference performance, Baichuan-13B uses ALiBi linear bias technology, which has a smaller computational load compared to Rotary Embedding, and significantly improves inference performance. Compared with the standard LLaMA-13B, the average inference speed (tokens/s) for generating 2000 tokens has been tested to increase by 31.6%:
| Model | tokens/s |
|-------------|----------|
| LLaMA-13B | 19.4 |
| Baichuan-13B| 25.4 |
The specific parameters are as follows:
| Model Name | Hidden Size | Num Layers | Num Attention Heads | Vocab Size | Total Params | Training Data (tokens) | Position Embedding | Max Length |
|-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------|
| Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2 trillion | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 |
| Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4 trillion | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 |
### 免责声明
我们在此声明,我们的开发团队并未基于 Baichuan-13B 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用 Baichuan-13B 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan-13B 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用 Baichuan-13B 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our development team has not developed any applications based on the Baichuan-13B model, whether on iOS, Android, the web, or any other platform. We strongly urge all users not to use the Baichuan-13B model for any activities that harm national social security or are illegal. In addition, we also ask users not to use the Baichuan-13B model for internet services that have not undergone appropriate security review and filing. We hope that all users will adhere to this principle to ensure that technological development takes place in a regulated and legal environment.
We have done our utmost to ensure the compliance of the data used in the model training process. However, despite our great efforts, due to the complexity of the model and data, there may still be some unforeseen issues. Therefore, we will not take any responsibility for any issues arising from the use of the Baichuan-13B open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, misused, disseminated, or improperly exploited.
## 训练详情
训练具体设置参见[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B)。
For specific training settings, please refer to [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B).
## 测评结果
### [C-Eval](https://cevalbenchmark.com/index.html#home)
| Model 5-shot | STEM | Social Sciences | Humanities | Others | Average |
|-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:|
| Baichuan-7B | 38.2 | 52.0 | 46.2 | 39.3 | 42.8 |
| Chinese-Alpaca-Plus-13B | 35.2 | 45.6 | 40.0 | 38.2 | 38.8 |
| Chinese-LLaMA-Plus-13B | 30.3 | 38.0 | 32.9 | 29.1 | 32.1 |
| Ziya-LLaMA-13B-Pretrain | 27.6 | 34.4 | 32.0 | 28.6 | 30.0 |
| LLaMA-13B | 27.0 | 33.6 | 27.7 | 27.6 | 28.5 |
| moss-moon-003-base (16B)| 27.0 | 29.1 | 27.2 | 26.9 | 27.4 |
| vicuna-13B | 22.8 | 24.8 | 22.3 | 18.5 | 22.2 |
| **Baichuan-13B-Base** | **45.9** | **63.5** | **57.2** | **49.3** | **52.4** |
| **Baichuan-13B-Chat** | **43.7** | **64.6** | **56.2** | **49.2** | **51.5** |
### [MMLU](https://arxiv.org/abs/2009.03300)
| Model 5-shot | STEM | Social Sciences | Humanities | Others | Average |
|-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:|
| LLaMA-13B | 36.1 | 53.0 | 44.0 | 52.8 | 46.3 |
| Chinese-Alpaca-Plus-13B | 36.9 | 48.9 | 40.5 | 50.5 | 43.9 |
| Ziya-LLaMA-13B-Pretrain | 35.6 | 47.6 | 40.1 | 49.4 | 42.9 |
| Baichuan-7B | 35.6 | 48.9 | 38.4 | 48.1 | 42.3 |
| Chinese-LLaMA-Plus-13B | 33.1 | 42.8 | 37.0 | 44.6 | 39.2 |
| vicuna-13B | 24.2 | 24.1 | 24.6 | 26.8 | 24.9 |
| moss-moon-003-base (16B)| 22.4 | 22.8 | 24.2 | 24.4 | 23.6 |
| **Baichuan-13B-Base** | **41.6** | **60.9** | **47.4** | **58.5** | **51.6** |
| **Baichuan-13B-Chat** | **40.9** | **60.9** | **48.8** | **59.0** | **52.1** |
> 说明:我们采用了 MMLU 官方的[评测方案](https://github.com/hendrycks/test)。
> Note: we used the official MMLU [evaluation scheme](https://github.com/hendrycks/test).
### [CMMLU](https://github.com/haonan-li/CMMLU)
| Model 5-shot | STEM | Humanities | Social Sciences | Others | China Specific | Average |
|-------------------------|:-----:|:----------:|:---------------:|:------:|:--------------:|:-------:|
| Baichuan-7B | 34.4 | 47.5 | 47.6 | 46.6 | 44.3 | 44.0 |
| Chinese-Alpaca-Plus-13B | 29.8 | 33.4 | 33.2 | 37.9 | 32.1 | 33.4 |
| Chinese-LLaMA-Plus-13B | 28.1 | 33.1 | 35.4 | 35.1 | 33.5 | 33.0 |
| Ziya-LLaMA-13B-Pretrain | 29.0 | 30.7 | 33.8 | 34.4 | 31.9 | 32.1 |
| LLaMA-13B | 29.2 | 30.8 | 31.6 | 33.0 | 30.5 | 31.2 |
| moss-moon-003-base (16B)| 27.2 | 30.4 | 28.8 | 32.6 | 28.7 | 29.6 |
| vicuna-13B | 24.0 | 25.4 | 25.3 | 25.0 | 25.0 | 24.9 |
| **Baichuan-13B-Base** | **41.7** | **61.1** | **59.8** | **59.0** | **56.4** | **55.3** |
| **Baichuan-13B-Chat** | **42.8** | **62.6** | **59.7** | **59.0** | **56.1** | **55.8** |
> 说明:CMMLU 是一个综合性的中文评估基准,专门用于评估语言模型在中文语境下的知识和推理能力。我们采用了其官方的[评测方案](https://github.com/haonan-li/CMMLU)。
> Note: CMMLU is a comprehensive Chinese evaluation benchmark designed to assess a language model's knowledge and reasoning ability in Chinese contexts. We used its official [evaluation scheme](https://github.com/haonan-li/CMMLU).
## 微信群组

|
alex2awesome/source-role-model | alex2awesome | 2023-07-11T02:46:00Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-07-11T02:14:08Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: source-role-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# source-role-model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5543
- F1: 0.5814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.12 | 100 | 1.0000 | 0.3391 |
| No log | 0.25 | 200 | 0.8371 | 0.5055 |
| No log | 0.37 | 300 | 0.8684 | 0.5019 |
| No log | 0.49 | 400 | 0.8668 | 0.5208 |
| 0.9644 | 0.62 | 500 | 0.8473 | 0.5422 |
| 0.9644 | 0.74 | 600 | 0.8852 | 0.4956 |
| 0.9644 | 0.86 | 700 | 0.8368 | 0.5124 |
| 0.9644 | 0.99 | 800 | 0.7913 | 0.5848 |
| 0.9644 | 1.11 | 900 | 1.0570 | 0.4950 |
| 0.8375 | 1.23 | 1000 | 0.9402 | 0.5280 |
| 0.8375 | 1.35 | 1100 | 0.8023 | 0.5084 |
| 0.8375 | 1.48 | 1200 | 0.9299 | 0.4807 |
| 0.8375 | 1.6 | 1300 | 0.9661 | 0.5194 |
| 0.8375 | 1.72 | 1400 | 0.8014 | 0.6016 |
| 0.8149 | 1.85 | 1500 | 0.8608 | 0.6105 |
| 0.8149 | 1.97 | 1600 | 0.9195 | 0.5741 |
| 0.8149 | 2.09 | 1700 | 1.2378 | 0.5964 |
| 0.8149 | 2.22 | 1800 | 1.0415 | 0.5902 |
| 0.8149 | 2.34 | 1900 | 1.0499 | 0.5526 |
| 0.6932 | 2.46 | 2000 | 1.0600 | 0.5832 |
| 0.6932 | 2.59 | 2100 | 0.9368 | 0.6074 |
| 0.6932 | 2.71 | 2200 | 1.0872 | 0.6270 |
| 0.6932 | 2.83 | 2300 | 1.0912 | 0.5707 |
| 0.6932 | 2.96 | 2400 | 0.8815 | 0.5602 |
| 0.6214 | 3.08 | 2500 | 1.1650 | 0.5993 |
| 0.6214 | 3.2 | 2600 | 1.4485 | 0.5821 |
| 0.6214 | 3.33 | 2700 | 1.5382 | 0.5775 |
| 0.6214 | 3.45 | 2800 | 1.3999 | 0.5696 |
| 0.6214 | 3.57 | 2900 | 1.3702 | 0.6114 |
| 0.5686 | 3.69 | 3000 | 1.3840 | 0.5635 |
| 0.5686 | 3.82 | 3100 | 1.3547 | 0.5403 |
| 0.5686 | 3.94 | 3200 | 1.0283 | 0.5723 |
| 0.5686 | 4.06 | 3300 | 1.3593 | 0.6242 |
| 0.5686 | 4.19 | 3400 | 1.5985 | 0.6004 |
| 0.4807 | 4.31 | 3500 | 1.5351 | 0.6177 |
| 0.4807 | 4.43 | 3600 | 1.4109 | 0.5779 |
| 0.4807 | 4.56 | 3700 | 1.6972 | 0.5637 |
| 0.4807 | 4.68 | 3800 | 1.5336 | 0.6047 |
| 0.4807 | 4.8 | 3900 | 1.7811 | 0.5909 |
| 0.4387 | 4.93 | 4000 | 1.5862 | 0.5869 |
| 0.4387 | 5.05 | 4100 | 1.7106 | 0.5637 |
| 0.4387 | 5.17 | 4200 | 1.5251 | 0.5624 |
| 0.4387 | 5.3 | 4300 | 1.5519 | 0.5944 |
| 0.4387 | 5.42 | 4400 | 1.7315 | 0.5908 |
| 0.3219 | 5.54 | 4500 | 1.7588 | 0.6015 |
| 0.3219 | 5.67 | 4600 | 1.9277 | 0.5635 |
| 0.3219 | 5.79 | 4700 | 1.7663 | 0.5891 |
| 0.3219 | 5.91 | 4800 | 1.8401 | 0.5917 |
| 0.3219 | 6.03 | 4900 | 2.0516 | 0.5845 |
| 0.2311 | 6.16 | 5000 | 2.0510 | 0.6166 |
| 0.2311 | 6.28 | 5100 | 2.1673 | 0.5732 |
| 0.2311 | 6.4 | 5200 | 2.0931 | 0.5819 |
| 0.2311 | 6.53 | 5300 | 2.2803 | 0.5961 |
| 0.2311 | 6.65 | 5400 | 1.9985 | 0.6010 |
| 0.1669 | 6.77 | 5500 | 2.1742 | 0.5664 |
| 0.1669 | 6.9 | 5600 | 2.1021 | 0.5732 |
| 0.1669 | 7.02 | 5700 | 2.2043 | 0.5641 |
| 0.1669 | 7.14 | 5800 | 2.2018 | 0.5837 |
| 0.1669 | 7.27 | 5900 | 2.3575 | 0.5721 |
| 0.1698 | 7.39 | 6000 | 2.4663 | 0.5662 |
| 0.1698 | 7.51 | 6100 | 2.2658 | 0.5851 |
| 0.1698 | 7.64 | 6200 | 2.1585 | 0.5676 |
| 0.1698 | 7.76 | 6300 | 2.1755 | 0.5774 |
| 0.1698 | 7.88 | 6400 | 2.2680 | 0.5696 |
| 0.1378 | 8.0 | 6500 | 2.3505 | 0.5615 |
| 0.1378 | 8.13 | 6600 | 2.2773 | 0.5705 |
| 0.1378 | 8.25 | 6700 | 2.3112 | 0.5662 |
| 0.1378 | 8.37 | 6800 | 2.4572 | 0.5679 |
| 0.1378 | 8.5 | 6900 | 2.4642 | 0.5766 |
| 0.0756 | 8.62 | 7000 | 2.4643 | 0.5885 |
| 0.0756 | 8.74 | 7100 | 2.5096 | 0.5779 |
| 0.0756 | 8.87 | 7200 | 2.4261 | 0.5789 |
| 0.0756 | 8.99 | 7300 | 2.3973 | 0.5757 |
| 0.0756 | 9.11 | 7400 | 2.4137 | 0.5906 |
| 0.0842 | 9.24 | 7500 | 2.4577 | 0.5844 |
| 0.0842 | 9.36 | 7600 | 2.5034 | 0.5840 |
| 0.0842 | 9.48 | 7700 | 2.5176 | 0.5810 |
| 0.0842 | 9.61 | 7800 | 2.5240 | 0.5852 |
| 0.0842 | 9.73 | 7900 | 2.5141 | 0.5824 |
| 0.0634 | 9.85 | 8000 | 2.5482 | 0.5814 |
| 0.0634 | 9.98 | 8100 | 2.5543 | 0.5814 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vimonteglione/ppo-Huggy | vimonteglione | 2023-07-11T02:42:10Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-11T02:42:00Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: vimonteglione/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NasimB/gpt2-concat-cbt-mod-formatting-iorder-rarity-all-4k | NasimB | 2023-07-11T02:39:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-11T00:45:49Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-mod-formatting-iorder-rarity-all-4k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-mod-formatting-iorder-rarity-all-4k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6962 | 0.29 | 500 | 5.6482 |
| 5.3352 | 0.59 | 1000 | 5.2168 |
| 4.9963 | 0.88 | 1500 | 4.9671 |
| 4.7147 | 1.17 | 2000 | 4.8164 |
| 4.5508 | 1.46 | 2500 | 4.6852 |
| 4.4503 | 1.76 | 3000 | 4.5766 |
| 4.3233 | 2.05 | 3500 | 4.4995 |
| 4.1239 | 2.34 | 4000 | 4.4513 |
| 4.0934 | 2.63 | 4500 | 4.3905 |
| 4.0645 | 2.93 | 5000 | 4.3376 |
| 3.8538 | 3.22 | 5500 | 4.3338 |
| 3.7937 | 3.51 | 6000 | 4.3034 |
| 3.781 | 3.8 | 6500 | 4.2718 |
| 3.6821 | 4.1 | 7000 | 4.2702 |
| 3.5082 | 4.39 | 7500 | 4.2633 |
| 3.5078 | 4.68 | 8000 | 4.2471 |
| 3.4936 | 4.97 | 8500 | 4.2346 |
| 3.34 | 5.27 | 9000 | 4.2492 |
| 3.3145 | 5.56 | 9500 | 4.2471 |
| 3.315 | 5.85 | 10000 | 4.2463 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
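A minimal generation sketch, assuming the checkpoint is a standard GPT-2 causal language model (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-cbt-mod-formatting-iorder-rarity-all-4k",
)

print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```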
|
alex2awesome/source-affiliation-model | alex2awesome | 2023-07-11T02:37:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-07-10T23:11:23Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: source-affiliation-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# source-affiliation-model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3321
- F1: 0.5348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.12 | 100 | 1.4535 | 0.2435 |
| No log | 0.25 | 200 | 1.3128 | 0.3899 |
| No log | 0.37 | 300 | 1.2888 | 0.4413 |
| No log | 0.49 | 400 | 1.1560 | 0.4614 |
| 1.4848 | 0.62 | 500 | 1.0988 | 0.4477 |
| 1.4848 | 0.74 | 600 | 1.1211 | 0.4583 |
| 1.4848 | 0.86 | 700 | 1.1152 | 0.4693 |
| 1.4848 | 0.99 | 800 | 1.0176 | 0.5018 |
| 1.4848 | 1.11 | 900 | 1.0942 | 0.4774 |
| 1.1019 | 1.23 | 1000 | 1.1785 | 0.5119 |
| 1.1019 | 1.35 | 1100 | 1.0751 | 0.4797 |
| 1.1019 | 1.48 | 1200 | 1.0759 | 0.5206 |
| 1.1019 | 1.6 | 1300 | 1.0756 | 0.5231 |
| 1.1019 | 1.72 | 1400 | 1.1329 | 0.4547 |
| 0.9431 | 1.85 | 1500 | 1.0617 | 0.4852 |
| 0.9431 | 1.97 | 1600 | 1.1046 | 0.5254 |
| 0.9431 | 2.09 | 1700 | 1.2489 | 0.5069 |
| 0.9431 | 2.22 | 1800 | 1.2113 | 0.5363 |
| 0.9431 | 2.34 | 1900 | 1.1782 | 0.5546 |
| 0.7589 | 2.46 | 2000 | 1.0453 | 0.5862 |
| 0.7589 | 2.59 | 2100 | 1.0810 | 0.5223 |
| 0.7589 | 2.71 | 2200 | 1.1470 | 0.5872 |
| 0.7589 | 2.83 | 2300 | 1.1522 | 0.5553 |
| 0.7589 | 2.96 | 2400 | 1.0712 | 0.6273 |
| 0.6875 | 3.08 | 2500 | 1.3458 | 0.5768 |
| 0.6875 | 3.2 | 2600 | 1.7052 | 0.5491 |
| 0.6875 | 3.33 | 2700 | 1.5080 | 0.6582 |
| 0.6875 | 3.45 | 2800 | 1.5851 | 0.5965 |
| 0.6875 | 3.57 | 2900 | 1.4771 | 0.5691 |
| 0.5391 | 3.69 | 3000 | 1.6717 | 0.5350 |
| 0.5391 | 3.82 | 3100 | 1.5607 | 0.5448 |
| 0.5391 | 3.94 | 3200 | 1.5464 | 0.6062 |
| 0.5391 | 4.06 | 3300 | 1.7645 | 0.5755 |
| 0.5391 | 4.19 | 3400 | 1.6715 | 0.5504 |
| 0.4928 | 4.31 | 3500 | 1.7604 | 0.5626 |
| 0.4928 | 4.43 | 3600 | 1.8984 | 0.5142 |
| 0.4928 | 4.56 | 3700 | 1.8012 | 0.5763 |
| 0.4928 | 4.68 | 3800 | 1.7107 | 0.5671 |
| 0.4928 | 4.8 | 3900 | 1.7697 | 0.5598 |
| 0.4233 | 4.93 | 4000 | 1.6296 | 0.6084 |
| 0.4233 | 5.05 | 4100 | 2.0418 | 0.5343 |
| 0.4233 | 5.17 | 4200 | 1.8203 | 0.5526 |
| 0.4233 | 5.3 | 4300 | 1.9760 | 0.5292 |
| 0.4233 | 5.42 | 4400 | 2.0136 | 0.5153 |
| 0.2518 | 5.54 | 4500 | 2.0137 | 0.5121 |
| 0.2518 | 5.67 | 4600 | 2.0053 | 0.5257 |
| 0.2518 | 5.79 | 4700 | 1.9539 | 0.5423 |
| 0.2518 | 5.91 | 4800 | 2.0159 | 0.5686 |
| 0.2518 | 6.03 | 4900 | 2.0411 | 0.5817 |
| 0.2234 | 6.16 | 5000 | 2.0025 | 0.5780 |
| 0.2234 | 6.28 | 5100 | 2.1189 | 0.5413 |
| 0.2234 | 6.4 | 5200 | 2.1936 | 0.5628 |
| 0.2234 | 6.53 | 5300 | 2.1825 | 0.5210 |
| 0.2234 | 6.65 | 5400 | 2.0767 | 0.5471 |
| 0.1829 | 6.77 | 5500 | 1.9747 | 0.5587 |
| 0.1829 | 6.9 | 5600 | 2.1182 | 0.5847 |
| 0.1829 | 7.02 | 5700 | 2.1597 | 0.5437 |
| 0.1829 | 7.14 | 5800 | 2.0307 | 0.5629 |
| 0.1829 | 7.27 | 5900 | 2.0912 | 0.5450 |
| 0.1226 | 7.39 | 6000 | 2.2383 | 0.5379 |
| 0.1226 | 7.51 | 6100 | 2.2311 | 0.5834 |
| 0.1226 | 7.64 | 6200 | 2.2456 | 0.5438 |
| 0.1226 | 7.76 | 6300 | 2.2423 | 0.5860 |
| 0.1226 | 7.88 | 6400 | 2.2922 | 0.5245 |
| 0.0883 | 8.0 | 6500 | 2.3304 | 0.5650 |
| 0.0883 | 8.13 | 6600 | 2.3929 | 0.5288 |
| 0.0883 | 8.25 | 6700 | 2.3928 | 0.5344 |
| 0.0883 | 8.37 | 6800 | 2.3854 | 0.5266 |
| 0.0883 | 8.5 | 6900 | 2.4275 | 0.5339 |
| 0.044 | 8.62 | 7000 | 2.3929 | 0.5380 |
| 0.044 | 8.74 | 7100 | 2.3587 | 0.5339 |
| 0.044 | 8.87 | 7200 | 2.3372 | 0.5423 |
| 0.044 | 8.99 | 7300 | 2.3488 | 0.5424 |
| 0.044 | 9.11 | 7400 | 2.3543 | 0.5818 |
| 0.0558 | 9.24 | 7500 | 2.3397 | 0.5554 |
| 0.0558 | 9.36 | 7600 | 2.3255 | 0.5394 |
| 0.0558 | 9.48 | 7700 | 2.3184 | 0.5557 |
| 0.0558 | 9.61 | 7800 | 2.3293 | 0.5669 |
| 0.0558 | 9.73 | 7900 | 2.3358 | 0.5666 |
| 0.0323 | 9.85 | 8000 | 2.3307 | 0.5344 |
| 0.0323 | 9.98 | 8100 | 2.3321 | 0.5348 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LuisFelipe11/distilbert-base-uncased-finetuned-cola | LuisFelipe11 | 2023-07-11T02:28:47Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-11T01:00:55Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: LuisFelipe11/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LuisFelipe11/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1911
- Validation Loss: 0.5394
- Train Matthews Correlation: 0.5262
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5151 | 0.4592 | 0.4583 | 0 |
| 0.3189 | 0.4731 | 0.5132 | 1 |
| 0.1911 | 0.5394 | 0.5262 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
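A minimal inference sketch, assuming the usual CoLA label convention (0 = unacceptable, 1 = acceptable); since the checkpoint was trained with Keras, the TensorFlow classes are used:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "LuisFelipe11/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was read by the whole class.", return_tensors="tf")
logits = model(**inputs).logits
print(tf.argmax(logits, axis=-1).numpy()[0])  # predicted class id
```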
|
RavenFangsk/chronoborous-33B-GPTQ | RavenFangsk | 2023-07-11T02:28:20Z | 5 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T03:26:46Z | Auto-GPTQ'd version of https://huggingface.co/Henk717/chronoboros-33B |
alex2awesome/source-type-model | alex2awesome | 2023-07-11T02:27:55Z | 168 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-07-10T21:32:31Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: source-type-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# source-type-model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6271
- F1: 0.6772
The model classifies sources into the following types:
```
'Cannot Determine'
'Report/Document'
'Named Individual'
'Unnamed Individual'
'Database'
'Unnamed Group'
'Named Group'
'Vote/Poll'
```
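A minimal inference sketch, assuming the checkpoint carries a standard sequence-classification head (if the config does not map ids to the tag names above, the pipeline will return generic `LABEL_i` labels):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="alex2awesome/source-type-model")

sentence = '"Revenue fell sharply last quarter," according to the annual report.'
print(classifier(sentence))  # e.g. a label such as "Report/Document" with a confidence score
```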
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.12 | 100 | 0.7192 | 0.3792 |
| No log | 0.25 | 200 | 0.7716 | 0.4005 |
| No log | 0.37 | 300 | 0.7565 | 0.5297 |
| No log | 0.49 | 400 | 0.5788 | 0.5806 |
| 0.8223 | 0.62 | 500 | 0.5402 | 0.5933 |
| 0.8223 | 0.74 | 600 | 0.5032 | 0.6666 |
| 0.8223 | 0.86 | 700 | 0.4658 | 0.6754 |
| 0.8223 | 0.99 | 800 | 0.5359 | 0.6441 |
| 0.8223 | 1.11 | 900 | 0.5295 | 0.6442 |
| 0.6009 | 1.23 | 1000 | 0.6077 | 0.6597 |
| 0.6009 | 1.35 | 1100 | 0.6169 | 0.6360 |
| 0.6009 | 1.48 | 1200 | 0.6014 | 0.6277 |
| 0.6009 | 1.6 | 1300 | 0.6382 | 0.6327 |
| 0.6009 | 1.72 | 1400 | 0.5226 | 0.6787 |
| 0.5644 | 1.85 | 1500 | 0.4922 | 0.6485 |
| 0.5644 | 1.97 | 1600 | 0.6181 | 0.6517 |
| 0.5644 | 2.09 | 1700 | 0.6106 | 0.6781 |
| 0.5644 | 2.22 | 1800 | 0.6652 | 0.6760 |
| 0.5644 | 2.34 | 1900 | 0.6252 | 0.6739 |
| 0.3299 | 2.46 | 2000 | 0.6620 | 0.6606 |
| 0.3299 | 2.59 | 2100 | 0.6317 | 0.6772 |
| 0.3299 | 2.71 | 2200 | 0.6170 | 0.6726 |
| 0.3299 | 2.83 | 2300 | 0.6400 | 0.6773 |
| 0.3299 | 2.96 | 2400 | 0.6271 | 0.6772 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pokorpohon/Fotoangel | pokorpohon | 2023-07-11T02:26:03Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T02:18:29Z | ---
license: creativeml-openrail-m
---
|
nickrosh/Evol-Replit-v1 | nickrosh | 2023-07-11T02:25:13Z | 10 | 8 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T04:24:58Z | ---
license: cc-by-sa-4.0
datasets:
- nickrosh/Evol-Instruct-Code-80k-v1
---
This model uses the Evol-Instruct-Code-80k-v1 dataset generated with the [Evol-Teacher](https://github.com/nickrosh/evol-teacher) repo. Currently, WizardCoder is one of the most performant code-generation models, beaten only by ChatGPT. Evol-Teacher takes the Code Alpaca 20k dataset and evolves each instruction through a randomly chosen evolution prompt to increase instruction complexity. These prompts range from increasing time/space complexity, to adding requirements, to introducing erroneous code to improve robustness. This is done three times, with pruning and post-processing to remove unwanted instructions and responses. The iterative addition of complexity gives higher-quality, more in-depth instructions than what is usually generated by Alpaca-style methods. As with WizardCoder and WizardLM, this can lead to performance that gets very close to that of RLHF-trained models.
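Since the base model is ReplitLM (see below), inference presumably follows the same remote-code loading pattern; a minimal sketch, where the instruction-style prompt is an assumption based on the training data format:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nickrosh/Evol-Replit-v1"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```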
This model is [ReplitLM](https://huggingface.co/replit/replit-code-v1-3b) fine-tuned with the following parameters:
```bash
--model_name_or_path replit/replit-code-v1-3b \
--data_path ./data/EvolInstruct-Code-80k/EvolInstruct-Code-80k.json \
--output_dir ./checkpoints \
--num_train_epochs 3 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 50 \
--save_total_limit 2 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--model_max_length 2000 \
--bf16 True \
--tf32 True
``` |
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.9 | jordyvl | 2023-07-11T02:14:34Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-11T01:01:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.9
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2366
- Accuracy: 0.63
- Brier Loss: 0.5035
- Nll: 2.8588
- F1 Micro: 0.63
- F1 Macro: 0.6311
- Ece: 0.1649
- Aurc: 0.1472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 2.8887 | 0.1225 | 0.9306 | 15.9457 | 0.1225 | 0.1226 | 0.1434 | 0.8620 |
| No log | 2.0 | 50 | 2.2120 | 0.3775 | 0.7577 | 9.7500 | 0.3775 | 0.3483 | 0.1992 | 0.3776 |
| No log | 3.0 | 75 | 1.7681 | 0.495 | 0.6387 | 5.6935 | 0.495 | 0.4838 | 0.1885 | 0.2491 |
| No log | 4.0 | 100 | 1.6420 | 0.5225 | 0.6038 | 5.2427 | 0.5225 | 0.5242 | 0.1757 | 0.2301 |
| No log | 5.0 | 125 | 1.5877 | 0.545 | 0.5986 | 4.6187 | 0.545 | 0.5282 | 0.1808 | 0.2248 |
| No log | 6.0 | 150 | 1.6460 | 0.5125 | 0.6162 | 3.9942 | 0.5125 | 0.5060 | 0.1962 | 0.2295 |
| No log | 7.0 | 175 | 1.8436 | 0.5125 | 0.6538 | 4.1740 | 0.5125 | 0.4932 | 0.2299 | 0.2451 |
| No log | 8.0 | 200 | 1.8205 | 0.545 | 0.6453 | 5.0752 | 0.545 | 0.5234 | 0.2057 | 0.2432 |
| No log | 9.0 | 225 | 1.7399 | 0.55 | 0.6260 | 4.5896 | 0.55 | 0.5460 | 0.2057 | 0.2258 |
| No log | 10.0 | 250 | 1.8559 | 0.55 | 0.6521 | 5.0532 | 0.55 | 0.5368 | 0.2209 | 0.2560 |
| No log | 11.0 | 275 | 1.8636 | 0.5625 | 0.6488 | 4.6642 | 0.5625 | 0.5544 | 0.2335 | 0.2187 |
| No log | 12.0 | 300 | 1.7461 | 0.55 | 0.6356 | 4.1298 | 0.55 | 0.5638 | 0.2047 | 0.2313 |
| No log | 13.0 | 325 | 1.7468 | 0.5625 | 0.6281 | 4.5451 | 0.5625 | 0.5570 | 0.2224 | 0.2214 |
| No log | 14.0 | 350 | 1.9616 | 0.545 | 0.6884 | 3.7999 | 0.545 | 0.5484 | 0.2691 | 0.2624 |
| No log | 15.0 | 375 | 2.0977 | 0.5175 | 0.7138 | 4.3792 | 0.5175 | 0.5055 | 0.2658 | 0.2917 |
| No log | 16.0 | 400 | 2.0238 | 0.5275 | 0.6896 | 4.5299 | 0.5275 | 0.5177 | 0.2664 | 0.2603 |
| No log | 17.0 | 425 | 1.8687 | 0.535 | 0.6534 | 3.7356 | 0.535 | 0.5388 | 0.2490 | 0.2448 |
| No log | 18.0 | 450 | 1.8210 | 0.5575 | 0.6492 | 4.3823 | 0.5575 | 0.5537 | 0.2533 | 0.2268 |
| No log | 19.0 | 475 | 1.7610 | 0.555 | 0.6325 | 3.9697 | 0.555 | 0.5503 | 0.2292 | 0.2161 |
| 0.5398 | 20.0 | 500 | 1.7125 | 0.5825 | 0.6125 | 3.4176 | 0.5825 | 0.5731 | 0.2140 | 0.1859 |
| 0.5398 | 21.0 | 525 | 1.6296 | 0.5775 | 0.6163 | 3.6014 | 0.5775 | 0.5871 | 0.2236 | 0.2051 |
| 0.5398 | 22.0 | 550 | 1.5965 | 0.57 | 0.5908 | 3.7668 | 0.57 | 0.5712 | 0.2058 | 0.1883 |
| 0.5398 | 23.0 | 575 | 1.4828 | 0.5875 | 0.5646 | 3.7028 | 0.5875 | 0.5854 | 0.1944 | 0.1714 |
| 0.5398 | 24.0 | 600 | 1.3983 | 0.6075 | 0.5481 | 3.3608 | 0.6075 | 0.6107 | 0.1966 | 0.1628 |
| 0.5398 | 25.0 | 625 | 1.5241 | 0.5925 | 0.5866 | 3.3669 | 0.5925 | 0.6019 | 0.2069 | 0.1886 |
| 0.5398 | 26.0 | 650 | 1.5540 | 0.58 | 0.5780 | 3.5184 | 0.58 | 0.5710 | 0.2131 | 0.1857 |
| 0.5398 | 27.0 | 675 | 1.4653 | 0.6 | 0.5768 | 2.9877 | 0.6 | 0.6043 | 0.2166 | 0.1781 |
| 0.5398 | 28.0 | 700 | 1.4883 | 0.5925 | 0.5646 | 3.7789 | 0.5925 | 0.5910 | 0.2096 | 0.1746 |
| 0.5398 | 29.0 | 725 | 1.5738 | 0.59 | 0.5914 | 4.0558 | 0.59 | 0.5879 | 0.2150 | 0.1957 |
| 0.5398 | 30.0 | 750 | 1.4017 | 0.6025 | 0.5583 | 3.4791 | 0.6025 | 0.6023 | 0.2150 | 0.1752 |
| 0.5398 | 31.0 | 775 | 1.3500 | 0.61 | 0.5365 | 3.2560 | 0.61 | 0.6157 | 0.1988 | 0.1579 |
| 0.5398 | 32.0 | 800 | 1.2977 | 0.6375 | 0.5140 | 3.0503 | 0.6375 | 0.6395 | 0.1847 | 0.1534 |
| 0.5398 | 33.0 | 825 | 1.3471 | 0.6175 | 0.5406 | 3.1888 | 0.6175 | 0.6104 | 0.2077 | 0.1689 |
| 0.5398 | 34.0 | 850 | 1.2992 | 0.615 | 0.5219 | 2.8944 | 0.615 | 0.6191 | 0.1826 | 0.1574 |
| 0.5398 | 35.0 | 875 | 1.2733 | 0.6225 | 0.5124 | 2.9352 | 0.6225 | 0.6238 | 0.1588 | 0.1505 |
| 0.5398 | 36.0 | 900 | 1.2821 | 0.6175 | 0.5231 | 3.0142 | 0.6175 | 0.6169 | 0.1672 | 0.1553 |
| 0.5398 | 37.0 | 925 | 1.2819 | 0.61 | 0.5200 | 2.6874 | 0.61 | 0.6116 | 0.1847 | 0.1540 |
| 0.5398 | 38.0 | 950 | 1.2664 | 0.615 | 0.5145 | 2.9287 | 0.615 | 0.6159 | 0.1961 | 0.1528 |
| 0.5398 | 39.0 | 975 | 1.2584 | 0.6225 | 0.5134 | 3.0058 | 0.6225 | 0.6230 | 0.1747 | 0.1508 |
| 0.0507 | 40.0 | 1000 | 1.2562 | 0.615 | 0.5114 | 2.9269 | 0.615 | 0.6169 | 0.1815 | 0.1504 |
| 0.0507 | 41.0 | 1025 | 1.2525 | 0.6225 | 0.5101 | 2.9199 | 0.6225 | 0.6239 | 0.1770 | 0.1496 |
| 0.0507 | 42.0 | 1050 | 1.2573 | 0.62 | 0.5133 | 2.9195 | 0.62 | 0.6221 | 0.1824 | 0.1511 |
| 0.0507 | 43.0 | 1075 | 1.2536 | 0.6125 | 0.5131 | 2.9026 | 0.6125 | 0.6121 | 0.1820 | 0.1511 |
| 0.0507 | 44.0 | 1100 | 1.2543 | 0.6225 | 0.5109 | 3.0693 | 0.6225 | 0.6235 | 0.1647 | 0.1500 |
| 0.0507 | 45.0 | 1125 | 1.2526 | 0.6125 | 0.5117 | 2.9018 | 0.6125 | 0.6141 | 0.1788 | 0.1500 |
| 0.0507 | 46.0 | 1150 | 1.2432 | 0.615 | 0.5068 | 2.9042 | 0.615 | 0.6167 | 0.1762 | 0.1484 |
| 0.0507 | 47.0 | 1175 | 1.2485 | 0.6275 | 0.5098 | 2.8927 | 0.6275 | 0.6251 | 0.1590 | 0.1496 |
| 0.0507 | 48.0 | 1200 | 1.2576 | 0.6125 | 0.5140 | 2.8956 | 0.6125 | 0.6137 | 0.1824 | 0.1524 |
| 0.0507 | 49.0 | 1225 | 1.2468 | 0.62 | 0.5094 | 2.8918 | 0.62 | 0.6204 | 0.1832 | 0.1496 |
| 0.0507 | 50.0 | 1250 | 1.2479 | 0.6175 | 0.5102 | 2.8921 | 0.6175 | 0.6178 | 0.1706 | 0.1491 |
| 0.0507 | 51.0 | 1275 | 1.2393 | 0.6225 | 0.5057 | 2.8813 | 0.6225 | 0.6229 | 0.1784 | 0.1486 |
| 0.0507 | 52.0 | 1300 | 1.2463 | 0.6175 | 0.5085 | 2.8959 | 0.6175 | 0.6184 | 0.1669 | 0.1495 |
| 0.0507 | 53.0 | 1325 | 1.2391 | 0.62 | 0.5061 | 2.8828 | 0.62 | 0.6215 | 0.1803 | 0.1471 |
| 0.0507 | 54.0 | 1350 | 1.2538 | 0.6175 | 0.5121 | 2.8795 | 0.6175 | 0.6167 | 0.1680 | 0.1512 |
| 0.0507 | 55.0 | 1375 | 1.2407 | 0.625 | 0.5064 | 2.8830 | 0.625 | 0.6259 | 0.1842 | 0.1482 |
| 0.0507 | 56.0 | 1400 | 1.2488 | 0.62 | 0.5099 | 2.8769 | 0.62 | 0.6198 | 0.1568 | 0.1499 |
| 0.0507 | 57.0 | 1425 | 1.2402 | 0.625 | 0.5052 | 2.8778 | 0.625 | 0.6260 | 0.1616 | 0.1481 |
| 0.0507 | 58.0 | 1450 | 1.2457 | 0.625 | 0.5077 | 2.8786 | 0.625 | 0.6260 | 0.1759 | 0.1474 |
| 0.0507 | 59.0 | 1475 | 1.2430 | 0.6275 | 0.5073 | 2.8744 | 0.6275 | 0.6266 | 0.1652 | 0.1486 |
| 0.0319 | 60.0 | 1500 | 1.2399 | 0.625 | 0.5056 | 2.8767 | 0.625 | 0.6256 | 0.1701 | 0.1474 |
| 0.0319 | 61.0 | 1525 | 1.2460 | 0.63 | 0.5087 | 2.8758 | 0.63 | 0.6329 | 0.1865 | 0.1491 |
| 0.0319 | 62.0 | 1550 | 1.2410 | 0.6225 | 0.5058 | 2.8719 | 0.6225 | 0.6229 | 0.1752 | 0.1477 |
| 0.0319 | 63.0 | 1575 | 1.2418 | 0.63 | 0.5060 | 2.8746 | 0.63 | 0.6319 | 0.1692 | 0.1484 |
| 0.0319 | 64.0 | 1600 | 1.2424 | 0.6275 | 0.5069 | 2.8672 | 0.6275 | 0.6279 | 0.1903 | 0.1475 |
| 0.0319 | 65.0 | 1625 | 1.2413 | 0.63 | 0.5061 | 2.8747 | 0.63 | 0.6304 | 0.1737 | 0.1471 |
| 0.0319 | 66.0 | 1650 | 1.2385 | 0.6325 | 0.5039 | 2.8726 | 0.6325 | 0.6358 | 0.1792 | 0.1473 |
| 0.0319 | 67.0 | 1675 | 1.2368 | 0.625 | 0.5047 | 2.8661 | 0.625 | 0.6261 | 0.1843 | 0.1467 |
| 0.0319 | 68.0 | 1700 | 1.2370 | 0.6275 | 0.5039 | 2.8691 | 0.6275 | 0.6294 | 0.1724 | 0.1471 |
| 0.0319 | 69.0 | 1725 | 1.2382 | 0.63 | 0.5050 | 2.8659 | 0.63 | 0.6317 | 0.1698 | 0.1472 |
| 0.0319 | 70.0 | 1750 | 1.2396 | 0.6275 | 0.5051 | 2.8670 | 0.6275 | 0.6290 | 0.1790 | 0.1474 |
| 0.0319 | 71.0 | 1775 | 1.2378 | 0.625 | 0.5045 | 2.8637 | 0.625 | 0.6268 | 0.1742 | 0.1476 |
| 0.0319 | 72.0 | 1800 | 1.2360 | 0.625 | 0.5037 | 2.8669 | 0.625 | 0.6269 | 0.1778 | 0.1468 |
| 0.0319 | 73.0 | 1825 | 1.2390 | 0.63 | 0.5049 | 2.8638 | 0.63 | 0.6310 | 0.1711 | 0.1474 |
| 0.0319 | 74.0 | 1850 | 1.2372 | 0.625 | 0.5045 | 2.8640 | 0.625 | 0.6269 | 0.1817 | 0.1475 |
| 0.0319 | 75.0 | 1875 | 1.2375 | 0.63 | 0.5044 | 2.8640 | 0.63 | 0.6313 | 0.1703 | 0.1472 |
| 0.0319 | 76.0 | 1900 | 1.2372 | 0.6275 | 0.5041 | 2.8621 | 0.6275 | 0.6290 | 0.1794 | 0.1473 |
| 0.0319 | 77.0 | 1925 | 1.2374 | 0.63 | 0.5041 | 2.8629 | 0.63 | 0.6313 | 0.1722 | 0.1472 |
| 0.0319 | 78.0 | 1950 | 1.2367 | 0.6275 | 0.5039 | 2.8620 | 0.6275 | 0.6294 | 0.1704 | 0.1474 |
| 0.0319 | 79.0 | 1975 | 1.2371 | 0.6275 | 0.5039 | 2.8619 | 0.6275 | 0.6294 | 0.1639 | 0.1474 |
| 0.0314 | 80.0 | 2000 | 1.2372 | 0.63 | 0.5041 | 2.8612 | 0.63 | 0.6310 | 0.1750 | 0.1474 |
| 0.0314 | 81.0 | 2025 | 1.2368 | 0.63 | 0.5038 | 2.8613 | 0.63 | 0.6309 | 0.1648 | 0.1473 |
| 0.0314 | 82.0 | 2050 | 1.2370 | 0.63 | 0.5038 | 2.8607 | 0.63 | 0.6305 | 0.1782 | 0.1473 |
| 0.0314 | 83.0 | 2075 | 1.2368 | 0.63 | 0.5038 | 2.8609 | 0.63 | 0.6307 | 0.1686 | 0.1472 |
| 0.0314 | 84.0 | 2100 | 1.2368 | 0.63 | 0.5037 | 2.8603 | 0.63 | 0.6305 | 0.1667 | 0.1472 |
| 0.0314 | 85.0 | 2125 | 1.2366 | 0.63 | 0.5036 | 2.8601 | 0.63 | 0.6309 | 0.1686 | 0.1473 |
| 0.0314 | 86.0 | 2150 | 1.2367 | 0.6325 | 0.5037 | 2.8600 | 0.6325 | 0.6335 | 0.1751 | 0.1471 |
| 0.0314 | 87.0 | 2175 | 1.2369 | 0.63 | 0.5037 | 2.8598 | 0.63 | 0.6307 | 0.1730 | 0.1473 |
| 0.0314 | 88.0 | 2200 | 1.2367 | 0.63 | 0.5036 | 2.8595 | 0.63 | 0.6307 | 0.1657 | 0.1472 |
| 0.0314 | 89.0 | 2225 | 1.2366 | 0.63 | 0.5036 | 2.8597 | 0.63 | 0.6307 | 0.1680 | 0.1472 |
| 0.0314 | 90.0 | 2250 | 1.2366 | 0.63 | 0.5036 | 2.8594 | 0.63 | 0.6307 | 0.1580 | 0.1472 |
| 0.0314 | 91.0 | 2275 | 1.2366 | 0.63 | 0.5035 | 2.8593 | 0.63 | 0.6307 | 0.1677 | 0.1472 |
| 0.0314 | 92.0 | 2300 | 1.2367 | 0.63 | 0.5035 | 2.8593 | 0.63 | 0.6307 | 0.1616 | 0.1472 |
| 0.0314 | 93.0 | 2325 | 1.2366 | 0.63 | 0.5035 | 2.8590 | 0.63 | 0.6307 | 0.1625 | 0.1472 |
| 0.0314 | 94.0 | 2350 | 1.2366 | 0.6325 | 0.5035 | 2.8590 | 0.6325 | 0.6333 | 0.1586 | 0.1470 |
| 0.0314 | 95.0 | 2375 | 1.2366 | 0.63 | 0.5035 | 2.8591 | 0.63 | 0.6307 | 0.1580 | 0.1472 |
| 0.0314 | 96.0 | 2400 | 1.2366 | 0.63 | 0.5035 | 2.8589 | 0.63 | 0.6307 | 0.1695 | 0.1471 |
| 0.0314 | 97.0 | 2425 | 1.2366 | 0.63 | 0.5035 | 2.8589 | 0.63 | 0.6311 | 0.1648 | 0.1472 |
| 0.0314 | 98.0 | 2450 | 1.2366 | 0.63 | 0.5035 | 2.8588 | 0.63 | 0.6311 | 0.1695 | 0.1471 |
| 0.0314 | 99.0 | 2475 | 1.2366 | 0.6325 | 0.5035 | 2.8589 | 0.6325 | 0.6337 | 0.1724 | 0.1470 |
| 0.0312 | 100.0 | 2500 | 1.2366 | 0.63 | 0.5035 | 2.8588 | 0.63 | 0.6311 | 0.1649 | 0.1472 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented | hafidikhsan | 2023-07-11T02:12:58Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-11T02:10:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0403
- Accuracy: 0.744
- F1: 0.7432
- Precision: 0.7436
- Recall: 0.744
## Model description
More information needed
## Intended uses & limitations
More information needed
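A minimal inference sketch with the `audio-classification` pipeline; the audio file name below is only a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the audio-classification pipeline
classifier = pipeline(
    "audio-classification",
    model="hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented",
)

# "learner_recording.wav" is a placeholder file name
print(classifier("learner_recording.wav"))
```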
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8567 | 1.0 | 313 | 0.9539 | 0.5388 | 0.5159 | 0.5387 | 0.5388 |
| 0.665 | 2.0 | 626 | 0.7520 | 0.6512 | 0.6545 | 0.6625 | 0.6512 |
| 0.629 | 3.0 | 939 | 0.7775 | 0.7008 | 0.6980 | 0.6978 | 0.7008 |
| 0.4793 | 4.0 | 1252 | 0.8696 | 0.7268 | 0.7295 | 0.7365 | 0.7268 |
| 0.2273 | 5.0 | 1565 | 1.0403 | 0.744 | 0.7432 | 0.7436 | 0.744 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zwtharry/PPO-rocket | zwtharry | 2023-07-11T02:09:34Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T02:09:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 234.64 +/- 40.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
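In the meantime, a minimal loading sketch might look like the following; the checkpoint filename inside the repo is an assumption (check the repo's file list), and the rollout uses the Gymnasium API expected by recent stable-baselines3 releases:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the .zip filename is assumed, not confirmed by this card
checkpoint = load_from_hub(repo_id="zwtharry/PPO-rocket", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Run the policy for a single step in LunarLander-v2
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _states = model.predict(obs, deterministic=True)
```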
|
ALazcanoG/nominal-groups-recognition-bert-base-spanish-wwm-uncased | ALazcanoG | 2023-07-11T02:07:33Z | 123 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:ALazcanoG/spanish_nominal_groups_conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-11T00:57:11Z | ---
language:
- es
tags:
- generated_from_trainer
datasets:
- ALazcanoG/spanish_nominal_groups_conll2003
model-index:
- name: nominal-groups-recognition-bert-base-spanish-wwm-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nominal-groups-recognition-bert-base-spanish-wwm-uncased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the ALazcanoG/spanish_nominal_groups_conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2519
- Body Part Precision: 0.6984
- Body Part Recall: 0.7711
- Body Part F1: 0.7329
- Body Part Number: 1066
- Disease Precision: 0.7230
- Disease Recall: 0.7923
- Disease F1: 0.7561
- Disease Number: 2725
- Family Member Precision: 0.9592
- Family Member Recall: 0.8246
- Family Member F1: 0.8868
- Family Member Number: 57
- Medication Precision: 0.7593
- Medication Recall: 0.7625
- Medication F1: 0.7609
- Medication Number: 240
- Procedure Precision: 0.5439
- Procedure Recall: 0.6389
- Procedure F1: 0.5876
- Procedure Number: 853
- Overall Precision: 0.6885
- Overall Recall: 0.7602
- Overall F1: 0.7226
- Overall Accuracy: 0.9230
## Model description
More information needed
## Intended uses & limitations
More information needed
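A minimal inference sketch; the clinical sentence below is invented for illustration:

```python
from transformers import pipeline

# Group sub-word predictions into entity spans with aggregation_strategy="simple"
ner = pipeline(
    "token-classification",
    model="ALazcanoG/nominal-groups-recognition-bert-base-spanish-wwm-uncased",
    aggregation_strategy="simple",
)

print(ner("El paciente refiere dolor abdominal desde hace dos semanas."))
```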
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4144 | 1.0 | 703 | 0.2530 | 0.6907 | 0.6998 | 0.6952 | 1066 | 0.7309 | 0.7394 | 0.7351 | 2725 | 0.9565 | 0.7719 | 0.8544 | 57 | 0.7798 | 0.7083 | 0.7424 | 240 | 0.5502 | 0.5651 | 0.5575 | 853 | 0.6946 | 0.6997 | 0.6971 | 0.9199 |
| 0.2118 | 2.0 | 1406 | 0.2519 | 0.6984 | 0.7711 | 0.7329 | 1066 | 0.7230 | 0.7923 | 0.7561 | 2725 | 0.9592 | 0.8246 | 0.8868 | 57 | 0.7593 | 0.7625 | 0.7609 | 240 | 0.5439 | 0.6389 | 0.5876 | 853 | 0.6885 | 0.7602 | 0.7226 | 0.9230 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jonathaniu/alpaca-bitcoin-tweets-sentiment-13b | Jonathaniu | 2023-07-11T01:35:04Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-10T03:01:20Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
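For reference, an equivalent quantization setup can be reproduced with `BitsAndBytesConfig` when loading a base model for this adapter; the base checkpoint id below is a placeholder, since it is not stated here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 8-bit loading, mirroring the values listed above
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

base_model_id = "huggyllama/llama-13b"  # placeholder: the base model is not named in this card
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter published in this repo
model = PeftModel.from_pretrained(base, "Jonathaniu/alpaca-bitcoin-tweets-sentiment-13b")
```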
### Framework versions
- PEFT 0.4.0.dev0
|
casque/TemplarAssassinv0.2 | casque | 2023-07-11T01:29:41Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T01:26:51Z | ---
license: creativeml-openrail-m
---
|
liyingjian/Reinforce-policy-gradient | liyingjian | 2023-07-11T01:28:57Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T01:28:48Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-policy-gradient
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 403.00 +/- 194.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
 This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
 To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jpherrerap/ner-roberta-es-clinical-trials-ner_v2 | jpherrerap | 2023-07-11T01:28:46Z | 106 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"es",
"dataset:jpherrerap/competencia2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-11T01:23:09Z | ---
language:
- es
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- jpherrerap/competencia2
model-index:
- name: ner-roberta-es-clinical-trials-ner_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-roberta-es-clinical-trials-ner_v2
This model is a fine-tuned version of [lcampillos/roberta-es-clinical-trials-ner](https://huggingface.co/lcampillos/roberta-es-clinical-trials-ner) on the jpherrerap/competencia2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1542
- Body Part Precision: 0.0
- Body Part Recall: 0.0
- Body Part F1: 0.0
- Body Part Number: 0
- Disease Precision: 0.0
- Disease Recall: 0.0
- Disease F1: 0.0
- Disease Number: 0
- Medication Precision: 0.0
- Medication Recall: 0.0
- Medication F1: 0.0
- Medication Number: 0
- Procedure Precision: 0.0
- Procedure Recall: 0.0
- Procedure F1: 0.0
- Procedure Number: 0
- Overall Precision: 0.0
- Overall Recall: 0.0
- Overall F1: 0.0
- Overall Accuracy: 0.6672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bastianchinchon/nominal-groups-recognition-roberta-clinical-wl-es | bastianchinchon | 2023-07-11T01:28:34Z | 120 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"es",
"dataset:bastianchinchon/spanish_nominal_groups_conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-11T01:00:00Z | ---
language:
- es
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- bastianchinchon/spanish_nominal_groups_conll2003
model-index:
- name: nominal-groups-recognition-roberta-clinical-wl-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nominal-groups-recognition-roberta-clinical-wl-es
This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the bastianchinchon/spanish_nominal_groups_conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2226
- Body Part Precision: 0.7427
- Body Part Recall: 0.7966
- Body Part F1: 0.7687
- Body Part Number: 413
- Disease Precision: 0.7915
- Disease Recall: 0.8174
- Disease F1: 0.8042
- Disease Number: 975
- Family Member Precision: 0.8286
- Family Member Recall: 0.9667
- Family Member F1: 0.8923
- Family Member Number: 30
- Medication Precision: 0.7905
- Medication Recall: 0.8925
- Medication F1: 0.8384
- Medication Number: 93
- Procedure Precision: 0.7105
- Procedure Recall: 0.7814
- Procedure F1: 0.7443
- Procedure Number: 311
- Overall Precision: 0.7666
- Overall Recall: 0.8128
- Overall F1: 0.7890
- Overall Accuracy: 0.9374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.356 | 1.0 | 1004 | 0.2241 | 0.7283 | 0.7724 | 0.7497 | 413 | 0.7603 | 0.8133 | 0.7859 | 975 | 0.9062 | 0.9667 | 0.9355 | 30 | 0.7547 | 0.8602 | 0.8040 | 93 | 0.6464 | 0.7524 | 0.6954 | 311 | 0.7345 | 0.7986 | 0.7652 | 0.9319 |
| 0.1823 | 2.0 | 2008 | 0.2226 | 0.7427 | 0.7966 | 0.7687 | 413 | 0.7915 | 0.8174 | 0.8042 | 975 | 0.8286 | 0.9667 | 0.8923 | 30 | 0.7905 | 0.8925 | 0.8384 | 93 | 0.7105 | 0.7814 | 0.7443 | 311 | 0.7666 | 0.8128 | 0.7890 | 0.9374 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
JMGaloDoido/distilbert-base-uncased-finetuned-cola | JMGaloDoido | 2023-07-11T01:26:53Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-10T23:59:32Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: JMGaloDoido/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JMGaloDoido/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1936
- Validation Loss: 0.5221
- Train Matthews Correlation: 0.5478
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
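A rough TensorFlow inference sketch; the example sentence is made up and the label order assumes the usual CoLA convention (0 = unacceptable, 1 = acceptable):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "JMGaloDoido/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Score a single sentence for linguistic acceptability
inputs = tokenizer("The book was read by the whole class.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```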
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5211 | 0.4812 | 0.4423 | 0 |
| 0.3244 | 0.4901 | 0.4973 | 1 |
| 0.1936 | 0.5221 | 0.5478 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
casque/VengefulSpiritv0.1 | casque | 2023-07-11T01:20:00Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T01:17:11Z | ---
license: creativeml-openrail-m
---
|
MDelan/distilbert-base-uncased-finetuned-cola | MDelan | 2023-07-11T01:19:40Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-11T01:14:40Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MDelan/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MDelan/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1879
- Validation Loss: 0.5580
- Train Matthews Correlation: 0.5127
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5181 | 0.4661 | 0.4379 | 0 |
| 0.3140 | 0.4981 | 0.4774 | 1 |
| 0.1879 | 0.5580 | 0.5127 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lucs1265/distilbert-base-uncased-finetuned-cola | lucs1265 | 2023-07-11T01:11:57Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-11T01:06:54Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: lucs1265/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lucs1265/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1898
- Validation Loss: 0.5233
- Train Matthews Correlation: 0.5286
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5194 | 0.4536 | 0.4725 | 0 |
| 0.3249 | 0.4763 | 0.4867 | 1 |
| 0.1898 | 0.5233 | 0.5286 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
VitCon/q-Taxi-v3 | VitCon | 2023-07-11T01:07:57Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-11T01:07:55Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
 # **Q-Learning** Agent playing **Taxi-v3**
 This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # older course notebooks use `import gym` instead

# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebook (Unit 2)
model = load_from_hub(repo_id="VitCon/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mrovejaxd/ABL_b | mrovejaxd | 2023-07-11T01:07:35Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-11T00:07:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: ABL_b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABL_b
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.2
|
vimonteglione/distilbert-base-uncased-finetuned-cola | vimonteglione | 2023-07-11T01:05:38Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-11T01:01:24Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vimonteglione/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vimonteglione/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1934
- Validation Loss: 0.5124
- Train Matthews Correlation: 0.5461
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5192 | 0.4472 | 0.4971 | 0 |
| 0.3259 | 0.4600 | 0.5249 | 1 |
| 0.1934 | 0.5124 | 0.5461 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MaitreHibou/ppo-SnowballTarget | MaitreHibou | 2023-07-11T01:00:11Z | 16 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-11T01:00:06Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
 We wrote a complete tutorial to help you learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
 - A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
 You can watch your agent **playing directly in your browser**:
 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
 2. Find your model_id: MaitreHibou/ppo-SnowballTarget
 3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
hopkins/strict-small-4 | hopkins | 2023-07-11T00:43:51Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-13T21:25:31Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: strict-small-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# strict-small-4
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8588
## Model description
More information needed
## Intended uses & limitations
More information needed
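A short generation sketch; the prompt is made up:

```python
from transformers import pipeline

# Sample a continuation from the fine-tuned GPT-2 checkpoint
generator = pipeline("text-generation", model="hopkins/strict-small-4")
print(generator("The little dog", max_new_tokens=30)[0]["generated_text"])
```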
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 9
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9925 | 1.83 | 1000 | 4.2033 |
| 3.7647 | 3.67 | 2000 | 3.9152 |
| 3.3569 | 5.5 | 3000 | 3.8495 |
| 3.0079 | 7.34 | 4000 | 3.8588 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
casque/CrystalMaidenv0.2 | casque | 2023-07-11T00:42:48Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T00:39:34Z | ---
license: creativeml-openrail-m
---
|
ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH | ALM-AHME | 2023-07-11T00:40:15Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T02:43:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Augmented-Final
split: train
args: Augmented-Final
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
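A minimal inference sketch; the image path is a placeholder:

```python
from transformers import pipeline

# Classify a histopathology image tile with the fine-tuned Swin V2 checkpoint
classifier = pipeline(
    "image-classification",
    model="ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH",
)

print(classifier("lung_tile.png"))  # placeholder file name
```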
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0929 | 1.0 | 281 | 0.0919 | 0.9657 |
| 0.0908 | 2.0 | 562 | 0.0127 | 0.9967 |
| 0.0525 | 3.0 | 843 | 0.0133 | 0.9947 |
| 0.1301 | 4.0 | 1125 | 0.0270 | 0.9927 |
| 0.0624 | 5.0 | 1406 | 0.0064 | 0.9973 |
| 0.0506 | 6.0 | 1687 | 0.0025 | 0.999 |
| 0.0001 | 6.99 | 1967 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
layoric/openllama-7b-qlora-orca | layoric | 2023-07-11T00:31:19Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-09T23:58:03Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
mrovejaxd/ABL_a | mrovejaxd | 2023-07-10T23:53:17Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-19T13:23:00Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ABL_a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABL_a
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7326
- Accuracy: 0.7
- F1: 0.7004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.2
|
jz0214/sd-class-butterflies-64 | jz0214 | 2023-07-10T23:52:24Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-07-10T23:50:42Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jz0214/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
aliceBG/distilbert-base-uncased-finetuned-cola | aliceBG | 2023-07-10T23:38:28Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-09T23:52:39Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aliceBG/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aliceBG/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1834
- Validation Loss: 0.5540
- Train Matthews Correlation: 0.5495
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5170 | 0.4723 | 0.4122 | 0 |
| 0.3177 | 0.4714 | 0.5232 | 1 |
| 0.1834 | 0.5540 | 0.5495 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
JBJoyce/whisper-large-v2-finetuned-gtzan | JBJoyce | 2023-07-10T23:32:02Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-10T19:35:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-large-v2-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7142
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0464 | 1.0 | 449 | 1.6761 | 0.42 |
| 0.9369 | 2.0 | 899 | 1.0398 | 0.74 |
| 1.0591 | 3.0 | 1348 | 1.0710 | 0.78 |
| 0.0632 | 4.0 | 1798 | 0.6605 | 0.86 |
| 0.0022 | 5.0 | 2247 | 1.0940 | 0.82 |
| 0.0004 | 6.0 | 2697 | 0.7089 | 0.92 |
| 0.0004 | 7.0 | 3146 | 0.6176 | 0.92 |
| 0.0005 | 8.0 | 3596 | 0.6688 | 0.9 |
| 0.0002 | 9.0 | 4045 | 0.7052 | 0.9 |
| 0.0002 | 9.99 | 4490 | 0.7142 | 0.9 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jz0214/sd-class-butterflies-32 | jz0214 | 2023-07-10T23:09:47Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-07-10T23:08:46Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jz0214/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
wesley7137/fal-7B-shard-quantum | wesley7137 | 2023-07-10T22:53:05Z | 0 | 0 | peft | [
"peft",
"pytorch",
"RefinedWebModel",
"custom_code",
"region:us"
] | null | 2023-07-10T22:04:14Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|