modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
databio/v2v-geo-hg38 | databio | 2024-02-12T19:02:01Z | 2 | 0 | null | [
"region:us"
] | null | 2023-12-11T20:30:47Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Vec2Vec GEO hg38
## Model Details
### Model Description
This is a Vec2Vec model that encodes embedding vectors of natural language into embedding vectors of BED files. This model was trained with BED files and natural language metadata from [GEO](https://www.ncbi.nlm.nih.gov/geo/) data. The
embedding vectors of natural language were encoded by [sentence-transformers](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2). The BED files were embedded by a pretrained [Region2Vec](https://huggingface.co/databio/r2v-ChIP-atlas-hg38-v2) model.
- **Developed by:** Ziyang "Claude" Hu
- **Model type:** Vec2Vec
- **BED genotype:** hg38
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/databio/geniml
- **Paper [optional]:** N/A
## Uses
This model can be used to search BED files with natural language query strings. In the search interface, the query string is encoded by the same sentence-transformers model, and the output vector is then
mapped into the final query vector by this Vec2Vec model. The K BED files whose embedding vectors (embedded by the same Region2Vec model) are closest to the final query vector are returned as the results. The model is limited to hg38; it is not
recommended for data from genomes other than hg38.
## How to Get Started with the Model
You can download and start encoding new genomic region data using the following code:
```python
from geniml.text2bednn import Vec2VecFNN
model = Vec2VecFNN("databio/v2v-geo-hg38")
```
[More Information Needed]
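Pending that information, here is a sketch of the search flow described under Uses. The `embedding_to_embedding` call and the query string are assumptions, not part of the original card; consult the geniml documentation for the exact interface.
```python
from geniml.text2bednn import Vec2VecFNN
from sentence_transformers import SentenceTransformer

# Encode the natural-language query with the same sentence-transformers model
# used during training.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")
query_embedding = encoder.encode("human embryonic stem cell ChIP-seq peaks")

# Map the text embedding into the Region2Vec embedding space.
model = Vec2VecFNN("databio/v2v-geo-hg38")
search_vector = model.embedding_to_embedding(query_embedding)  # assumed method name

# `search_vector` can then be compared (e.g. by cosine similarity) against the
# Region2Vec embeddings of BED files to retrieve the K nearest files.
```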
## Training Details
### Training Data
TODO |
dataautogpt3/ProteusV0.3 | dataautogpt3 | 2024-02-12T18:58:10Z | 87,533 | 93 | diffusers | [
"diffusers",
"text-to-image",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-02-12T18:05:03Z | ---
pipeline_tag: text-to-image
widget:
- text: >-
Anime full body portrait of a swordsman holding his weapon in front of him. He is facing the camera with a fierce look on his face. Anime key visual (best quality, HD, ~+~aesthetic~+~:1.2)
output:
url: upscaled_image.png
- text: >-
spacious,circular underground room,{dirtied and bloodied white tiles},amalgamation,flesh,plastic,dark fabric,core,pulsating heart,limbs,human-like arms,twisted angelic wings,arms,covered in skin,feathers,scales,undulate slowly,unseen current,convulsing,head area,chaotic,mass of eyes,mouths,no human features,smaller forms,cherubs,demons,golden wires,surround,holy light,tv static effect,golden glow,shadows,terrifying essence,overwhelming presence,nightmarish,landscape,sparse,cavernous,eerie,dynamic,motion,striking,awe-inspiring,nightmarish,nightmarish,nightmare,horrifying,bio-mechanical,body horror,amalgamation
output:
url: 2.png
- text: >-
A robot holding a sign saying 'The Application did not respond' in red colors
output:
url: 3.png
- text: >-
A photograph of Hughyen in his early twenties, (an inspiring artist whose art focuses on glitching images and vaporwave color gradients with unexpected conflicting compositions:0.5)
output:
url: 4.png
- text: >-
Anime mugshot of a tough woman. She is holding a prison sign that reads "Proteus". Her face is censored. Anime key visual (best quality, HD, ~+~aesthetic~+~:1.2)
output:
url: 7.png
- text: >-
Glitch art. 1980s anime, vintage, analogue horror. ((static and noise)), chromatic aberration
output:
url: 5.png
- text: >-
Masterpiece, glitch, holy holy holy, fog, by DarkIncursio
output:
url: 6.png
license: gpl-3.0
---
<Gallery />
## ProteusV0.3: The Anime Update
Proteus V0.3 has been advanced with an additional 200,000 anime-related images, further refined by a selection of 15,000 aesthetically pleasing images, enhancing its lighting effects significantly. This upgrade preserves its understanding of prompts and maintains its photorealistic and stylistic capabilities without suffering from catastrophic forgetting.
## Proteus
Proteus serves as a sophisticated enhancement over OpenDalleV1.1, leveraging its core functionalities to deliver superior outcomes. Key areas of advancement include heightened responsiveness to prompts and augmented creative capacities. To achieve this, it was fine-tuned using approximately 220,000 GPTV captioned images from copyright-free stock images (with some anime included), which were then normalized. Additionally, DPO (Direct Preference Optimization) was employed through a collection of 10,000 carefully selected high-quality, AI-generated image pairs.
In pursuit of optimal performance, numerous LORA (Low-Rank Adaptation) models are trained independently before being selectively incorporated into the principal model via dynamic application methods. These techniques involve targeting particular segments within the model while avoiding interference with other areas during the learning phase. Consequently, Proteus exhibits marked improvements in portraying intricate facial characteristics and lifelike skin textures, all while sustaining commendable proficiency across various aesthetic domains, notably surrealism, anime, and cartoon-style visualizations.
## Settings for ProteusV0.3
Use these settings for the best results with ProteusV0.3:
CFG Scale: Use a CFG scale of 7 to 8
Steps: 20 to 60 steps; use more steps for more detail, or 20 steps for faster results.
Sampler: DPM++ 2M SDE
Scheduler: Karras
Resolution: 1280x1280 or 1024x1024
Please also consider using these keywords to improve your prompts:
best quality, HD, `~*~aesthetic~*~`.
If you are having trouble coming up with prompts, you can use this GPT I put together to help you refine them: https://chat.openai.com/g/g-RziQNoydR-diffusion-master
## Use it with 🧨 diffusers
```python
import torch
from diffusers import (
StableDiffusionXLPipeline,
KDPM2AncestralDiscreteScheduler,
AutoencoderKL
)
# Load VAE component
vae = AutoencoderKL.from_pretrained(
"madebyollin/sdxl-vae-fp16-fix",
torch_dtype=torch.float16
)
# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
"dataautogpt3/ProteusV0.3",
vae=vae,
torch_dtype=torch.float16
)
pipe.scheduler = KDPM2AncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to('cuda')
# Define prompts and generate image
prompt = "black fluffy gorgeous dangerous cat animal creature, large orange eyes, big fluffy ears, piercing gaze, full moon, dark ambiance, best quality, extremely detailed"
negative_prompt = "nsfw, bad quality, bad anatomy, worst quality, low quality, low resolutions, extra fingers, blur, blurry, ugly, wrongs proportions, watermark, image artifacts, lowres, ugly, jpeg artifacts, deformed, noisy image"
image = pipe(
prompt,
negative_prompt=negative_prompt,
width=1024,
height=1024,
guidance_scale=7,
num_inference_steps=20
).images[0]
```
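The snippet above uses `KDPM2AncestralDiscreteScheduler`; to match the DPM++ 2M SDE sampler and Karras scheduler recommended in the settings section, here is a hedged variant continuing from the pipeline set up above (a standard diffusers pattern, not taken from the original card):
```python
from diffusers import DPMSolverMultistepScheduler

# DPM++ 2M SDE with Karras sigmas, matching the recommended settings.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="sde-dpmsolver++",
    use_karras_sigmas=True,
)

# Recommended CFG scale of 7 to 8 and 20 to 60 steps.
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    num_inference_steps=30,
).images[0]
```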
Please support the work I do by donating to me on:
https://www.buymeacoffee.com/DataVoid
or by following me on
https://twitter.com/DataPlusEngine |
bartowski/MBeagleX-7B-exl2 | bartowski | 2024-02-12T18:57:00Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"text-generation",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | 2024-02-12T18:40:23Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/MBTrix-7B
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of MBeagleX-7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/mlabonne/MBeagleX-7B
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/MBeagleX-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/MBeagleX-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/MBeagleX-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/MBeagleX-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/MBeagleX-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/MBeagleX-7B-exl2 MBeagleX-7B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about measurement.json) to a folder called `MBeagleX-7B-exl2`:
```shell
mkdir MBeagleX-7B-exl2
huggingface-cli download bartowski/MBeagleX-7B-exl2 --local-dir MBeagleX-7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir MBeagleX-7B-exl2-6_5
huggingface-cli download bartowski/MBeagleX-7B-exl2 --revision 6_5 --local-dir MBeagleX-7B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir MBeagleX-7B-exl2-6.5
huggingface-cli download bartowski/MBeagleX-7B-exl2 --revision 6_5 --local-dir MBeagleX-7B-exl2-6.5 --local-dir-use-symlinks False
```
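Alternatively, a branch can be fetched from Python with the `huggingface_hub` library (a hedged sketch using the standard `snapshot_download` API; this is not part of the original instructions):
```python
from huggingface_hub import snapshot_download

# Download the 6.5 bpw branch into a local folder.
snapshot_download(
    repo_id="bartowski/MBeagleX-7B-exl2",
    revision="6_5",
    local_dir="MBeagleX-7B-exl2-6_5",
    local_dir_use_symlinks=False,
)
```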
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
nchen909/llama2_7b_sft_20710 | nchen909 | 2024-02-12T18:54:42Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-12T17:05:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
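The card leaves this section unfilled. As a stopgap, here is a minimal hedged sketch assuming a standard Llama-style causal LM saved with transformers (the prompt is only an example):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nchen909/llama2_7b_sft_20710"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```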
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eliotz/a2c-PandaReachDense-v3 | eliotz | 2024-02-12T18:33:53Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T18:29:46Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
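Until the author fills in the snippet above, here is a hedged sketch of the usual SB3 + huggingface_sb3 pattern (the checkpoint filename is an assumption; check the repo's Files tab for the actual name):
```python
import gymnasium as gym
import panda_gym  # registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption.
checkpoint = load_from_hub(
    repo_id="eliotz/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```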
|
indischepartij/MiniCPM-3B-Hephaestus | indischepartij | 2024-02-12T18:28:13Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"gmonsoon/MiniCPM-2B-Hercules-v2.0",
"gmonsoon/MiniCPM-2B-OpenHermes-2.5-v2",
"conversational",
"base_model:indischepartij/MiniCPM-3B-Hercules-v2.0",
"base_model:merge:indischepartij/MiniCPM-3B-Hercules-v2.0",
"base_model:indischepartij/MiniCPM-3B-OpenHermes-2.5-v2",
"base_model:merge:indischepartij/MiniCPM-3B-OpenHermes-2.5-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-11T05:49:37Z | ---
tags:
- merge
- mergekit
- lazymergekit
- gmonsoon/MiniCPM-2B-Hercules-v2.0
- gmonsoon/MiniCPM-2B-OpenHermes-2.5-v2
base_model:
- gmonsoon/MiniCPM-2B-Hercules-v2.0
- gmonsoon/MiniCPM-2B-OpenHermes-2.5-v2
license: apache-2.0
---
# MiniCPM-2B-Hephaestus
MiniCPM-2B-Hephaestus is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [gmonsoon/MiniCPM-2B-Hercules-v2.0](https://huggingface.co/gmonsoon/MiniCPM-2B-Hercules-v2.0)
* [gmonsoon/MiniCPM-2B-OpenHermes-2.5-v2](https://huggingface.co/gmonsoon/MiniCPM-2B-OpenHermes-2.5-v2)
## 🧩 Configuration
```yaml
models:
- model: gmonsoon/MiniCPM-2B-Hercules-v2.0
parameters:
density: 0.5
weight: 0.5
- model: gmonsoon/MiniCPM-2B-OpenHermes-2.5-v2
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: gmonsoon/MiniCPM-2B-Hercules-v2.0
parameters:
normalize: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gmonsoon/MiniCPM-2B-Hephaestus"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
veronoicc/DAMGPT-small-ServerSeeker | veronoicc | 2024-02-12T18:28:03Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"en",
"de",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-12T18:17:17Z | ---
language:
- en
- de
library_name: transformers
pipeline_tag: conversational
tags:
- conversational
--- |
Zanshinmu/AlienGirl | Zanshinmu | 2024-02-12T18:14:44Z | 8 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-02-12T18:14:27Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
<lora:cybergirl_v9_50000_lora_f16:.0.6>, full_body photo, giger style alien
breathtaking Australian colorful future punk Cybergirl, BREAK medium brown
hair, BREAK glowing cyborg eyes BREAK subdermal armor,cyborg arm,, cyborg
exoskeleton melding with flesh, highly detailed, detailed face, psychedelic,
fractal detail, colorful. body horror, glistening with slick filth
parameters:
negative_prompt: bokeh, blurry, 3d, anime, drawing, art
output:
url: images/00011-2153680076.png
- text: >-
<lora:cybergirl_v9_50000_lora_f16:.0.6>, full_body photo, giger style alien
piercings Romani military future punk Cybergirl, BREAK long natural hair,
BREAK tech sunglasses BREAK cosmetic implants,, cyborg exoskeleton melding
with flesh, highly detailed, detailed face, psychedelic, fractal detail,
colorful. body horror, glistening with slick filth
parameters:
negative_prompt: bokeh, blurry, 3d, anime, drawing, art
output:
url: images/00009-2153680074.png
- text: >-
<lora:cybergirl_v9_50000_lora_f16:.0.6>, full_body photo, giger style alien
piercings Caucasian dark future punk Cybergirl, BREAK long natural hair,
BREAK gorgeous eyes BREAK visible cyborg implants on face,cyborg limb,,
cyborg exoskeleton melding with flesh, highly detailed, detailed face,
psychedelic, fractal detail, colorful. body horror, glistening with slick
filth
parameters:
negative_prompt: bokeh, blurry, 3d, anime, drawing, art
output:
url: images/00008-2153680073.png
- text: >-
<lora:cybergirl_v9_50000_lora_f16:.0.6>, full_body photo, giger style alien
gothic Australian trenchcoat over bodysuit future punk Cybergirl, BREAK
short natural hair, BREAK glowing cyborg eyes BREAK cyborg limb,, cyborg
exoskeleton melding with flesh, highly detailed, detailed face, psychedelic,
fractal detail, colorful. body horror, glistening with slick filth
parameters:
negative_prompt: bokeh, blurry, 3d, anime, drawing, art
output:
url: images/00007-2153680072.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: CyberGirl, giger style
license: apache-2.0
---
# AlienGirl
<Gallery />
## Model description
This LoRA was a quick-and-dirty effort from images I created with my CyberGirl LoRA.
## Trigger words
You should use `CyberGirl` and `giger style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Zanshinmu/AlienGirl/tree/main) them in the Files & versions tab.
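A hedged sketch for loading this LoRA on top of the SDXL base model with diffusers (the trigger words come from the section above; `load_lora_weights` resolves the weight file automatically when the repo contains a single safetensors file):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA weights from this repo.
pipe.load_lora_weights("Zanshinmu/AlienGirl")

# Use both trigger words noted above.
image = pipe("CyberGirl, giger style, full_body photo, highly detailed").images[0]
image.save("aliengirl.png")
```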
|
karimimanesh/text_stance_detection_v2 | karimimanesh | 2024-02-12T18:06:36Z | 176 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T18:06:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Klark333/darkfantasy | Klark333 | 2024-02-12T17:47:00Z | 69 | 6 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:unknown",
"region:us"
] | text-to-image | 2024-02-12T17:46:39Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/67adffb4cd7472105f5c8499fa445d73.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: dark fantasy 1970-1980's
license: unknown
---
# 1970' dark fantasy
<Gallery />
## Model description
80's movie, dark fantasy, poster, illustration; 80s dark fantasy, 80s film comics aesthetic fantasy
## Trigger words
You should use `dark fantasy 1970-1980's` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Klark333/darkfantasy/tree/main) them in the Files & versions tab.
|
ayush753/my-pet-dog-xyz | ayush753 | 2024-02-12T17:41:23Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-12T17:33:38Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-XYZ Dreambooth model trained by ayush753 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 4SF21AD012
Sample pictures of this concept:

|
furrutiav/bert_qa_extractor_cockatiel_2022_baseline_signal_over_subsample_it_749 | furrutiav | 2024-02-12T17:37:31Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-02-12T17:37:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sam2ai/qwen_1.5_odia_4b | sam2ai | 2024-02-12T17:29:23Z | 2 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-02-11T17:43:28Z | ---
license: other
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: Qwen/Qwen1.5-4B
model-index:
- name: qwen_1.5_odia_4b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: Qwen/Qwen1.5-4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# is_qwen_derived_model: true
trust_remote_code: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: OdiaGenAI/all_combined_odia_171k
type: alpaca:chatml
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./lora-out-qwen-4b-odia
hub_model_id: sam2ai/qwen_1.5_odia_4b
sequence_len: 2048 # supports up to 8192
sample_packing: false
pad_to_sequence_len:
adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: Qwen-instruct-4b-odia
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention:
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# qwen_1.5_odia_4b
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.977 | 0.0 | 1 | 1.0190 |
| 0.4901 | 0.25 | 2108 | 0.4872 |
| 0.3966 | 0.5 | 4216 | 0.4347 |
| 0.3127 | 0.75 | 6324 | 0.4104 |
| 0.3172 | 1.0 | 8432 | 0.3932 |
| 0.281 | 1.25 | 10540 | 0.3778 |
| 0.2845 | 1.5 | 12648 | 0.3684 |
| 0.2459 | 1.75 | 14756 | 0.3616 |
| 0.1641 | 2.0 | 16864 | 0.3525 |
| 0.2121 | 2.25 | 18972 | 0.3506 |
| 0.2564 | 2.5 | 21080 | 0.3448 |
| 0.1378 | 2.75 | 23188 | 0.3426 |
| 0.2002 | 3.0 | 25296 | 0.3409 |
| 0.1671 | 3.25 | 27404 | 0.3439 |
| 0.1464 | 3.5 | 29512 | 0.3421 |
| 0.1741 | 3.75 | 31620 | 0.3421 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.0
- Pytorch 2.0.1+gita61a294
- Datasets 2.16.1
- Tokenizers 0.15.0 |
gayanin/bart-with-pubmed-asr-noise-data-0.1-v2 | gayanin | 2024-02-12T17:28:00Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:gayanin/bart-with-pubmed-noise-data-0.1-v2",
"base_model:finetune:gayanin/bart-with-pubmed-noise-data-0.1-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-12T17:23:48Z | ---
license: apache-2.0
base_model: gayanin/bart-with-pubmed-noise-data-0.1-v2
tags:
- generated_from_trainer
model-index:
- name: bart-with-pubmed-asr-noise-data-0.1-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-with-pubmed-asr-noise-data-0.1-v2
This model is a fine-tuned version of [gayanin/bart-with-pubmed-noise-data-0.1-v2](https://huggingface.co/gayanin/bart-with-pubmed-noise-data-0.1-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4242 | 0.87 | 500 | 0.3986 |
| 0.2914 | 1.73 | 1000 | 0.3416 |
| 0.2518 | 2.6 | 1500 | 0.3346 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
GccX11/q-Taxi-v3 | GccX11 | 2024-02-12T17:24:23Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T17:24:22Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub here is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="GccX11/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
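Continuing from the snippet above, here is a hedged sketch of a greedy rollout, assuming the pickled dict follows the Hugging Face Deep RL course format (the `qtable` and `max_steps` keys are assumptions):
```python
import numpy as np

state, info = env.reset()

# Follow the greedy policy from the Q-table for one episode.
for _ in range(int(model.get("max_steps", 99))):
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break
env.close()
```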
|
GccX11/q-FrozenLake-v1-4x4-noSlippery | GccX11 | 2024-02-12T17:16:35Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T17:16:34Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.61 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="GccX11/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Kudod/my_awesome_model_IMDB | Kudod | 2024-02-12T17:05:09Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:finiteautomata/bertweet-base-sentiment-analysis",
"base_model:finetune:finiteautomata/bertweet-base-sentiment-analysis",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-10T04:52:46Z | ---
base_model: finiteautomata/bertweet-base-sentiment-analysis
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model_IMDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_IMDB
This model is a fine-tuned version of [finiteautomata/bertweet-base-sentiment-analysis](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6664
- Accuracy: 0.8949
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3261 | 1.0 | 782 | 0.2674 | 0.8903 |
| 0.2072 | 2.0 | 1564 | 0.3035 | 0.8820 |
| 0.1408 | 3.0 | 2346 | 0.3532 | 0.8967 |
| 0.0876 | 4.0 | 3128 | 0.4793 | 0.8922 |
| 0.0661 | 5.0 | 3910 | 0.4755 | 0.8925 |
| 0.0373 | 6.0 | 4692 | 0.5159 | 0.8937 |
| 0.034 | 7.0 | 5474 | 0.5527 | 0.8923 |
| 0.0264 | 8.0 | 6256 | 0.6391 | 0.8947 |
| 0.0179 | 9.0 | 7038 | 0.6491 | 0.8942 |
| 0.0094 | 10.0 | 7820 | 0.6664 | 0.8949 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.14.0
|
stablediffusionapi/hima | stablediffusionapi | 2024-02-12T16:59:08Z | 29 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-12T16:57:30Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Hima API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace the key in the code below and change **model_id** to "hima".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/hima)
Model link: [View model](https://modelslab.com/models/hima)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "hima",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
gayanin/bart-with-woz-pubmed-noise-data-0.1-v2 | gayanin | 2024-02-12T16:49:50Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:gayanin/bart-with-woz-noise-data-0.1-v2",
"base_model:finetune:gayanin/bart-with-woz-noise-data-0.1-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-12T16:17:06Z | ---
license: apache-2.0
base_model: gayanin/bart-with-woz-noise-data-0.1-v2
tags:
- generated_from_trainer
model-index:
- name: bart-with-woz-pubmed-noise-data-0.1-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-with-woz-pubmed-noise-data-0.1-v2
This model is a fine-tuned version of [gayanin/bart-with-woz-noise-data-0.1-v2](https://huggingface.co/gayanin/bart-with-woz-noise-data-0.1-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.395 | 0.11 | 500 | 0.3361 |
| 0.3239 | 0.21 | 1000 | 0.2993 |
| 0.2485 | 0.32 | 1500 | 0.2899 |
| 0.3632 | 0.43 | 2000 | 0.2650 |
| 0.3141 | 0.54 | 2500 | 0.2555 |
| 0.2913 | 0.64 | 3000 | 0.2537 |
| 0.2587 | 0.75 | 3500 | 0.2474 |
| 0.2745 | 0.86 | 4000 | 0.2408 |
| 0.2725 | 0.96 | 4500 | 0.2362 |
| 0.2025 | 1.07 | 5000 | 0.2468 |
| 0.2088 | 1.18 | 5500 | 0.2368 |
| 0.1912 | 1.28 | 6000 | 0.2447 |
| 0.2098 | 1.39 | 6500 | 0.2311 |
| 0.1839 | 1.5 | 7000 | 0.2336 |
| 0.2407 | 1.61 | 7500 | 0.2280 |
| 0.1692 | 1.71 | 8000 | 0.2229 |
| 0.1965 | 1.82 | 8500 | 0.2220 |
| 0.2013 | 1.93 | 9000 | 0.2175 |
| 0.1455 | 2.03 | 9500 | 0.2243 |
| 0.1466 | 2.14 | 10000 | 0.2235 |
| 0.1493 | 2.25 | 10500 | 0.2223 |
| 0.1224 | 2.35 | 11000 | 0.2207 |
| 0.1491 | 2.46 | 11500 | 0.2173 |
| 0.1484 | 2.57 | 12000 | 0.2175 |
| 0.1582 | 2.68 | 12500 | 0.2175 |
| 0.1592 | 2.78 | 13000 | 0.2137 |
| 0.1467 | 2.89 | 13500 | 0.2153 |
| 0.1637 | 3.0 | 14000 | 0.2136 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
julep-ai/samantha-1-tokenizer | julep-ai | 2024-02-12T16:44:48Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-12T16:35:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vargol/ProteusV0.2 | Vargol | 2024-02-12T16:40:33Z | 45 | 0 | diffusers | [
"diffusers",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-02-12T16:03:28Z | ---
license: gpl-3.0
---
This is an fp16 variant of ProteusV0.2,
https://huggingface.co/dataautogpt3/ProteusV0.2
currently under the GPL-3.0 licence.
It was simply created by:
```py
import torch
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("dataautogpt3/ProteusV0.2", torch_dtype=torch.float16)
pipeline.save_pretrained('fp16_ProteusV0.2', safe_serialization=True, variant='fp16')
```
See the original model for details.
The fp32 version of the model, even when converted to fp16 at load time, uses up too much RAM,
hence my need for this version.
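Loading the fp16 variant back is the mirror image of the snippet above (a hedged sketch; repo id taken from this card):
```py
import torch
from diffusers import DiffusionPipeline

# Load the fp16 weights directly, avoiding the larger fp32 download.
pipeline = DiffusionPipeline.from_pretrained(
    "Vargol/ProteusV0.2", torch_dtype=torch.float16, variant="fp16"
)
```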
Dave
|
stablediffusionapi/generator2000xl | stablediffusionapi | 2024-02-12T16:32:54Z | 29 | 2 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-02-12T16:31:04Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# generator2000xl API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace the key in the code below and change **model_id** to "generator2000xl".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/generator2000xl)
Model link: [View model](https://modelslab.com/models/generator2000xl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "generator2000xl",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
cybert79/spamai | cybert79 | 2024-02-12T16:31:47Z | 117 | 4 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"dataset:SetFit/enron_spam",
"dataset:Deysi/spam-detection-dataset",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T13:42:50Z | ---
license: unknown
datasets:
- SetFit/enron_spam
- Deysi/spam-detection-dataset
metrics:
- accuracy
---
# Model Card for Spam Detection Model
This model card outlines a spam detection model trained on the SetFit/enron_spam and Deysi/spam-detection-dataset from Hugging Face. The model aims to classify emails or text messages into spam or not spam (ham) with high accuracy, leveraging the BERT architecture for natural language processing tasks.
## Model Details
### Model Description
This spam detection model was developed to identify and filter out unwanted or harmful emails and messages automatically. It was fine-tuned on two significant datasets featuring real-world spam examples, demonstrating a high level of accuracy in distinguishing between spam and ham.
- **Developed by:** AI and cybersecurity researchers.
- **Model type:** BERT for Sequence Classification.
- **Language(s) (NLP):** English.
- **License:** Unknown.
- **Finetuned from model:** `bert-base-uncased`.
## Uses
### Direct Use
The model is intended for direct use in email filtering systems, cybersecurity applications, and any platform needing to identify spam content within text data.
### Out-of-Scope Use
The model is not designed for identifying phishing attempts, detecting malware within attachments, or other security threats beyond the scope of text-based spam content. It may not perform well on texts significantly different from those found in the training datasets, such as messages in languages other than English or texts from domains vastly different from emails.
## Bias, Risks, and Limitations
The model's performance is highly dependent on the nature and diversity of the training data. There might be biases in the datasets that could affect the model's predictions, particularly for edge cases or underrepresented categories of spam. Users should be aware of these limitations and consider additional layers of security and content moderation according to their specific needs.
## How to Get Started with the Model
To get started, load the pretrained model and tokenizer from this repository and use them to preprocess your text data. The model can then be applied to classify texts as spam or not spam (ham), as sketched below.
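A minimal usage sketch; it assumes this repository's checkpoint works with the standard `text-classification` pipeline and that the label names distinguish spam from ham:
```python
from transformers import pipeline

# Assumes the checkpoint in this repo (cybert79/spamai) loads with the standard
# text-classification pipeline; the exact label names may differ from "spam"/"ham".
classifier = pipeline("text-classification", model="cybert79/spamai")
print(classifier("Congratulations! You have won a free cruise. Reply now to claim your prize."))
```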
## Training Details
### Training Data
The model was trained on the SetFit/enron_spam and Deysi/spam-detection-dataset, which include a variety of spam and ham examples collected from real-world email data.
### Training Procedure
The model was fine-tuned for 3 epochs, achieving a final training loss of 0.0239 and an accuracy of 99.55% on the evaluation set. Training was conducted using a batch size of 8, with a learning rate of 2e-5.
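For reference, a rough sketch of a fine-tuning setup matching the hyperparameters above (3 epochs, batch size 8, learning rate 2e-5); the dataset columns, splits, and preprocessing below are assumptions rather than details taken from this card:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# One of the two training datasets; the "text"/"label" columns and train/test splits are assumed.
ds = load_dataset("SetFit/enron_spam")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

ds = ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="spam-bert",
    num_train_epochs=3,              # as stated in this card
    per_device_train_batch_size=8,   # as stated in this card
    learning_rate=2e-5,              # as stated in this card
    evaluation_strategy="epoch",
)
Trainer(model=model, args=args, train_dataset=ds["train"], eval_dataset=ds["test"]).train()
```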
## Evaluation
### Testing Data, Factors & Metrics
The evaluation was performed on a test split from the datasets, focusing on the accuracy metric to assess the model's performance.
### Results
The model achieved an evaluation accuracy of 99.55% with an evaluation loss of 0.0448, indicating excellent performance in distinguishing between spam and ham messages.
## Summary
Given its high accuracy and low loss, this model presents a robust solution for spam detection tasks. However, users are encouraged to assess the model's applicability to their specific use cases, considering potential biases and the model's limitations.
|
furrutiav/bert_qa_extractor_cockatiel_2022_mixtral_v2_over_subsample_it_141 | furrutiav | 2024-02-12T16:25:06Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-02-12T16:24:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
macabdul9/t5-glue-all-900K | macabdul9 | 2024-02-12T16:21:22Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-12T16:07:10Z | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-glue-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-glue-all
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0692
- Em accuracy: 89.1
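A minimal inference sketch for this checkpoint (the GLUE task-prefix format is an assumption; the exact prompts used during fine-tuning are not documented here):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("macabdul9/t5-glue-all-900K")
model = AutoModelForSeq2SeqLM.from_pretrained("macabdul9/t5-glue-all-900K")

# T5-style GLUE prompting; the "sst2 sentence:" prefix is an assumption.
inputs = tokenizer("sst2 sentence: the movie was a delight to watch", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```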
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ppsingh/iki_sector_setfit | ppsingh | 2024-02-12T16:17:29Z | 54 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:GIZ/SECTOR-multilabel-mpnet_w",
"base_model:finetune:GIZ/SECTOR-multilabel-mpnet_w",
"co2_eq_emissions",
"region:us"
] | text-classification | 2024-02-12T15:28:40Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Specific information applicable to Parties, including regional economic integration
organizations and their member States, that have reached an agreement to act jointly
under Article 4, paragraph 2, of the Paris Agreement, including the Parties that
agreed to act jointly and the terms of the agreement, in accordance with Article
4, paragraphs 16–18, of the Paris Agreement. Not applicable. (c). How the Party’s
preparation of its nationally determined contribution has been informed by the
outcomes of the global stocktake, in accordance with Article 4, paragraph 9, of
the Paris Agreement.
- text: 'In the shipping and aviation sectors, emission reduction efforts will be
focused on distributing eco-friendly ships and enhancing the operational efficiency
of aircraft. Agriculture, livestock farming and fisheries: The Republic Korea
is introducing various options to accelerate low-carbon farming, for instance,
improving irrigation techniques in rice paddies and adopting low-input systems
for nitrogen fertilizers.'
- text: As part of this commitment, Oman s upstream oil and gas industry is developing
economically viable solutions to phase out routine flaring as quickly as possible
and ahead of the World Bank s target date. IV. Climate Preparedness and Resilience.
The Sultanate of Oman has stepped up its efforts in advancing its expertise and
methodologies to better manage the climate change risks over the past five years.
The adaptation efforts are underway, and the status of adaptation planning is
still at a nascent stage.
- text: 'Synergy and coherence 46 VII- Gender and youth 46 VIII- Education and employment
48 ANNEXES. 49 Annex No. 1: Details of mitigation measures, conditional and non-conditional,
by sector 49 Annex No.2: List of adaptation actions proposed by sectors. 57 Annex
No.3: GCF project portfolio. 63 CONTRIBUTION DENTERMINEE AT NATIONAL LEVEL CDN
MAURITANIE LIST OF TABLES Table 1: Summary of funding needs for the CND 2021-2030
updated. 12 Table 2: CND 2021-2030 mitigation measures updated by sector (cumulative
cost and reduction potential for the period). 14 Table 3: CND 2021-2030 adaptation
measures updated by sector. Error!'
- text: In the transport sector, restructuing is planned through a number of large
infrastructure initiatives aiming to revive the role of public transport and achieving
a relevant share of fuel efficient vehicles. Under both the conditional and unconditional
mitigation scenarios, Lebanon will achieve sizeable emission reductions. With
regards to adaptation, Lebanon has planned comprehensive sectoral actions related
to water, agriculture/forestry and biodiversity, for example related to irrigation,
forest management, etc. It also continues developing adaptation strategies in
the remaining sectors.
pipeline_tag: text-classification
inference: false
co2_eq_emissions:
emissions: 25.8151164022705
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: Intel(R) Xeon(R) CPU @ 2.00GHz
ram_total_size: 12.674781799316406
hours_used: 0.622
hardware_used: 1 x Tesla T4
base_model: ppsingh/SECTOR-multilabel-mpnet_w
---
# SetFit with ppsingh/SECTOR-multilabel-mpnet_w
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [ppsingh/SECTOR-multilabel-mpnet_w](https://huggingface.co/ppsingh/SECTOR-multilabel-mpnet_w) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [ppsingh/SECTOR-multilabel-mpnet_w](https://huggingface.co/ppsingh/SECTOR-multilabel-mpnet_w)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("ppsingh/iki_sector_setfit")
# Run inference
preds = model("In the shipping and aviation sectors, emission reduction efforts will be focused on distributing eco-friendly ships and enhancing the operational efficiency of aircraft. Agriculture, livestock farming and fisheries: The Republic Korea is introducing various options to accelerate low-carbon farming, for instance, improving irrigation techniques in rice paddies and adopting low-input systems for nitrogen fertilizers.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 35 | 76.164 | 170 |
- Training dataset size: 250 examples
| Class | Positive Count of Class|
|:-------------|:--------|
| Economy-wide | 88 |
| Energy | 63 |
| Other Sector | 64 |
| Transport | 139 |
- Validation dataset size: 42 examples
| Class | Positive Count of Class|
|:-------------|:--------|
| Economy-wide | 15 |
| Energy | 11 |
| Other Sector | 11 |
| Transport | 24 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 10)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
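For reference, a rough sketch of how the hyperparameters above map onto SetFit's `TrainingArguments` (values are taken from this card; arguments not listed keep their defaults, and the training data itself is not published):
```python
from setfit import TrainingArguments

# Mirrors the hyperparameters listed above; pass this to setfit.Trainer together with
# the base model and a labelled dataset to reproduce a comparable training run.
args = TrainingArguments(
    batch_size=(16, 2),
    num_epochs=(1, 10),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.01,
    seed=42,
)
```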
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0005 | 1 | 0.2029 | - |
| 0.0993 | 200 | 0.0111 | 0.1124 |
| 0.1985 | 400 | 0.0063 | 0.111 |
| 0.2978 | 600 | 0.0183 | 0.1214 |
| 0.3970 | 800 | 0.0197 | 0.1248 |
| 0.4963 | 1000 | 0.0387 | 0.1339 |
| 0.5955 | 1200 | 0.0026 | 0.1181 |
| 0.6948 | 1400 | 0.0378 | 0.1208 |
| 0.7940 | 1600 | 0.0285 | 0.1267 |
| 0.8933 | 1800 | 0.0129 | 0.1254 |
| 0.9926 | 2000 | 0.0341 | 0.1271 |
### Classifier Training Results
| Epoch | Training F1-micro|Training F1-Samples |Training F1-weighted|Validation F1-micro |Validation F1-samples |Validation F1-weighted |
|:------:|:----------------:|:------------------:|:------------------:|:------------------:|:--------------------:|:---------------------:|
| 0 | 0.954 | 0.972 | 0.945 |0.824 | 0.819 | 0.813 |
| 1 | 0.994 | 0.996 | 0.994 |0.850 | 0.832 | 0.852 |
| 2 | 0.981 | 0.989 | 0.979 |0.850 | 0.843 | 0.852 |
| 3 | 0.995 | 0.997 | 0.995 |0.852 | 0.843 | 0.858 |
| 4 | 0.994 | 0.996 | 0.994 |0.852 | 0.843 | 0.858 |
| 5 | 0.995 | 0.997 | 0.995 |0.859 | 0.848 | 0.863 |
|label | precision |recall |f1-score| support|
|:-------------:|:---------:|:-----:|:------:|:------:|
|Economy-wide |0.857 |0.800 |0.827 | 15.0 |
|Energy |1.00 |0.818 |0.900 | 11.0 |
|Other Sector |0.615 |0.727 |0.667 | 11.0 |
|Transport |0.958 |0.958 |0.958 | 24.0 |
- Micro Avg: Precision = 0.866, Recall = 0.852, F1 = 0.859504
- Samples Avg: Precision = 0.869, Recall = 0.861, F1 = 0.848
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.026 kg of CO2
- **Hours Used**: 0.622 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x Tesla T4
- **CPU Model**: Intel(R) Xeon(R) CPU @ 2.00GHz
- **RAM Size**: 12.67 GB
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.3.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
rishabhjain16/whisper-tiny | rishabhjain16 | 2024-02-12T16:06:47Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-12T16:06:47Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-tiny
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 7.54
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 17.15
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Test WER
type: wer
value: 141
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
This forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Tiny on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
7.547098647858638
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-tiny",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
rishabhjain16/whisper-small | rishabhjain16 | 2024-02-12T16:06:01Z | 72 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-12T16:05:58Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.432213777886737
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 7.628304527060248
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Test WER
type: wer
value: 87.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13.0
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args:
language: dv
metrics:
- name: Wer
type: wer
value: 125.69809089960707
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
This forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Small on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.432213777886737
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-small",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
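As a rough sketch of that workflow (not a complete recipe: the dataset preparation and padding data collator described in the blog post are assumed to exist as `train_dataset`, `eval_dataset` and `data_collator`, and the hyperparameter values are only illustrative):
```python
from transformers import (
    WhisperForConditionalGeneration,
    WhisperProcessor,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
# let the model learn language/task tokens from the labels instead of forcing them
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-finetuned",  # illustrative path
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    max_steps=4000,
    fp16=True,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=train_dataset,  # assumed: log-Mel input_features + label token ids
    eval_dataset=eval_dataset,    # assumed
    data_collator=data_collator,  # assumed: pads input_features and labels separately
    tokenizer=processor.feature_extractor,
)
trainer.train()
```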
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification has not been evaluated and is not appropriate, particularly for inferring human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, compared to many existing ASR systems, the models exhibit improved robustness to accents, background noise, and technical language, as well as zero-shot translation from multiple languages into English, and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse in lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
MarkelTaichi/ppo-LunarLander-v2 | MarkelTaichi | 2024-02-12T16:05:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T15:31:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.12 +/- 14.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
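A typical loading pattern for Stable-Baselines3 checkpoints hosted on the Hub looks like the sketch below; the checkpoint filename is an assumption and should be checked against the files in this repository:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# the filename is an assumption; check the repository's file listing for the actual name
checkpoint = load_from_hub(
    repo_id="MarkelTaichi/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```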
|
furrutiav/bert_qa_extractor_cockatiel_2022_z_value_over_subsample_it_727 | furrutiav | 2024-02-12T15:52:27Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-02-12T15:51:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
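In the meantime, a generic feature-extraction sketch applies (standard 🤗 Transformers usage; the example sentence and the [CLS] pooling choice are assumptions, not documented behaviour of this model):
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "furrutiav/bert_qa_extractor_cockatiel_2022_z_value_over_subsample_it_727"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Texto de ejemplo.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# one embedding per input, taken from the [CLS] position of the last hidden state
embedding = outputs.last_hidden_state[:, 0]
print(embedding.shape)
```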
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hdeldar/distilbert-base-uncased-finetuned-cola | hdeldar | 2024-02-12T15:51:58Z | 46 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T15:47:40Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: hdeldar/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hdeldar/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1972
- Validation Loss: 0.5241
- Train Matthews Correlation: 0.5294
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
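The serialized optimizer configuration above corresponds, roughly, to the following Keras objects (a reconstruction sketch for readability, not the original training script):
```python
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear decay from 2e-5 to 0 over 1602 steps
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1602,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```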
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5250 | 0.4718 | 0.4527 | 0 |
| 0.3234 | 0.4414 | 0.5235 | 1 |
| 0.1972 | 0.5241 | 0.5294 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.17.0
- Tokenizers 0.15.1
|
gayanin/bart-with-pubmed-noise-data-0.1-v2 | gayanin | 2024-02-12T15:51:34Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-12T15:18:34Z | ---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: bart-with-pubmed-noise-data-0.1-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-with-pubmed-noise-data-0.1-v2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
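For reference, these settings map roughly onto the following 🤗 Transformers `TrainingArguments` (a sketch; `output_dir` is an assumption and `fp16=True` stands in for the "Native AMP" mixed precision noted above):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bart-with-pubmed-noise-data-0.1-v2",  # assumption
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```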
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4161 | 0.11 | 500 | 0.3441 |
| 0.342 | 0.21 | 1000 | 0.3091 |
| 0.2694 | 0.32 | 1500 | 0.2969 |
| 0.3792 | 0.43 | 2000 | 0.2712 |
| 0.3219 | 0.54 | 2500 | 0.2601 |
| 0.3001 | 0.64 | 3000 | 0.2574 |
| 0.2606 | 0.75 | 3500 | 0.2489 |
| 0.2716 | 0.86 | 4000 | 0.2415 |
| 0.2714 | 0.96 | 4500 | 0.2382 |
| 0.2072 | 1.07 | 5000 | 0.2429 |
| 0.2111 | 1.18 | 5500 | 0.2377 |
| 0.1977 | 1.28 | 6000 | 0.2455 |
| 0.2171 | 1.39 | 6500 | 0.2309 |
| 0.1853 | 1.5 | 7000 | 0.2314 |
| 0.2436 | 1.61 | 7500 | 0.2269 |
| 0.171 | 1.71 | 8000 | 0.2220 |
| 0.2032 | 1.82 | 8500 | 0.2226 |
| 0.2028 | 1.93 | 9000 | 0.2175 |
| 0.1448 | 2.03 | 9500 | 0.2227 |
| 0.1447 | 2.14 | 10000 | 0.2216 |
| 0.1516 | 2.25 | 10500 | 0.2200 |
| 0.1294 | 2.35 | 11000 | 0.2197 |
| 0.1569 | 2.46 | 11500 | 0.2157 |
| 0.1505 | 2.57 | 12000 | 0.2160 |
| 0.152 | 2.68 | 12500 | 0.2151 |
| 0.1588 | 2.78 | 13000 | 0.2117 |
| 0.1451 | 2.89 | 13500 | 0.2134 |
| 0.1644 | 3.0 | 14000 | 0.2115 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
gayanin/bart-with-woz-noise-data-0.1-v2 | gayanin | 2024-02-12T15:49:37Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-12T15:21:35Z | ---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: bart-with-woz-noise-data-0.1-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-with-woz-noise-data-0.1-v2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2188 | 0.13 | 500 | 0.1794 |
| 0.1741 | 0.26 | 1000 | 0.1518 |
| 0.1631 | 0.39 | 1500 | 0.1327 |
| 0.1318 | 0.53 | 2000 | 0.1272 |
| 0.1238 | 0.66 | 2500 | 0.1168 |
| 0.1451 | 0.79 | 3000 | 0.1103 |
| 0.1166 | 0.92 | 3500 | 0.1068 |
| 0.0833 | 1.05 | 4000 | 0.1054 |
| 0.1029 | 1.18 | 4500 | 0.1017 |
| 0.1174 | 1.31 | 5000 | 0.0971 |
| 0.0786 | 1.44 | 5500 | 0.0956 |
| 0.1184 | 1.58 | 6000 | 0.0951 |
| 0.0984 | 1.71 | 6500 | 0.0926 |
| 0.0959 | 1.84 | 7000 | 0.0893 |
| 0.093 | 1.97 | 7500 | 0.0893 |
| 0.0783 | 2.1 | 8000 | 0.0910 |
| 0.0678 | 2.23 | 8500 | 0.0927 |
| 0.0756 | 2.36 | 9000 | 0.0889 |
| 0.0684 | 2.5 | 9500 | 0.0877 |
| 0.0573 | 2.63 | 10000 | 0.0872 |
| 0.0544 | 2.76 | 10500 | 0.0855 |
| 0.0579 | 2.89 | 11000 | 0.0845 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
Flamoverse/merged_model | Flamoverse | 2024-02-12T15:45:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-02-12T15:44:48Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
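Given the metadata (a PEFT adapter on top of `mistralai/Mistral-7B-Instruct-v0.2`), a minimal loading sketch might look like the following; it assumes this repository contains only adapter weights and that the base model is downloaded automatically:
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# loads the base model referenced in the adapter config and applies this adapter on top
model = AutoPeftModelForCausalLM.from_pretrained("Flamoverse/merged_model")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

prompt = "[INST] Hello! [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```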
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
Zaphare/ppo-LunarLander-v2 | Zaphare | 2024-02-12T15:41:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T13:55:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.09 +/- 14.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kidyu/Moza-7B-v1.0-GGUF | kidyu | 2024-02-12T15:40:24Z | 37 | 1 | null | [
"gguf",
"mergekit",
"merge",
"base_model:kidyu/Moza-7B-v1.0",
"base_model:quantized:kidyu/Moza-7B-v1.0",
"region:us"
] | null | 2024-02-12T13:37:03Z | ---
base_model: kidyu/Moza-7B-v1.0
inference: false
quantized_by: kidyu
tags:
- mergekit
- merge
---
Quantized GGUF of my meme-merge [Moza-7B-v1.0](https://huggingface.co/kidyu/Moza-7B-v1.0/) |
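A minimal way to run one of the quantized files with `llama-cpp-python` might look like the sketch below; the exact `.gguf` filename is an assumption, so check the repository file listing first.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# the filename is an assumption; pick one of the .gguf files actually present in the repo
gguf_path = hf_hub_download(
    repo_id="kidyu/Moza-7B-v1.0-GGUF",
    filename="Moza-7B-v1.0.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a short haiku about merging models.", max_tokens=64)
print(out["choices"][0]["text"])
```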
not-lain/MyRepo1.0 | not-lain | 2024-02-12T15:34:50Z | 194 | 0 | transformers | [
"transformers",
"safetensors",
"MobileNetV1",
"image-classification",
"custom_code",
"autotrain_compatible",
"region:us"
] | image-classification | 2024-02-12T15:33:46Z |
---
tags:
- custom_code
---
# How to use
you can load the model via the following command
```python
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained("not-lain/MyRepo1.0", trust_remote_code=True)
```
or you can use the pipeline
```python
from transformers import pipeline
pipe = pipeline(model="not-lain/MyRepo1.0", trust_remote_code=True)
pipe(
"url",
download=True, # will call the download_img method
clean_output=False # will be passed as postprocess_kwargs
)
```
# Parameters
The pipeline supports the following parameters:
* download
* clean_output
you can also use the following method to download images from the web
```python
pipe.download_img(url)
```
|
ppsingh/iki_target_setfit | ppsingh | 2024-02-12T15:24:33Z | 57 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:GIZ/TAPP-multilabel-mpnet",
"base_model:finetune:GIZ/TAPP-multilabel-mpnet",
"co2_eq_emissions",
"region:us"
] | text-classification | 2024-02-11T18:11:00Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: During 2021-2030, Thailand s LEDS will be implemented through the NDC roadmap
and sectoral action plans which provide detailed guidance on measures and realistic
actions to achieve the 1st NDC target by 2030, as well as regular monitoring and
evaluation of the progress and achievement. The monitoring and evaluation of the
mitigation measures relating to the Thailand’s LEDS will be carried out to ensure
its effectiveness and efficiency in achieving its objectives and key performance
indicators. Because it is a long-term plan spanning many years during which many
changes can occur, it is envisaged that it will be subject to a comprehensive
review every five years. This is consistent with the approach under the Paris
Agreement that assigned Parties to submit their NDCs to the UNFCCC every five
year.
- text: The NDC also benefited from the reviews and comments of these implementing
partners as well as local and international experts. Special thanks to The Honourable
Molwyn Joseph, Minister for Health, Wellness and the Environment, for his unwavering
commitment to advance this ambitious climate change agenda, while Antigua and
Barbuda faced an outbreak of the COVID-19 pandemic. Significant contributions
to the process were made by a wide-cross section of stakeholders from the public
and private sector, civil society, trade and industry groups and training institutions,
who attended NDC-related workshops, consultations and participated in key stakeholder
interviews organized to inform the NDC update.
- text: Antigua and Barbuda will mainstream gender in its energy planning through
an Inclusive Renewable Energy Strategy. This strategy will recognize and acknowledge,
among other things, the gender norms, and inequalities prevalent in the energy
sector, women and men’s differentiated access to energy, their different energy
needs and preferences, and different impacts that energy access could have on
their livelihoods. Antigua and Barbuda’s plan for an inclusive renewable energy
transition will ensure continued affordable and reliable access to electricity
and other energy services for all.
- text: 'Thailand’s climate actions are divided into short-term, medium-term and long-term
targets up to 2050. For the mitigation actions, short-term targets include: (i)
develop medium- and long-term GHG emission reduction targets and prepare roadmaps
for the implementation by sector, including the GHG emission reduction target
on a voluntary basis (pre-2020 target), Nationally Appropriate Mitigation Actions
(NAMAs) roadmaps, and measurement, reporting, and verification mechanisms, (ii)
establish domestic incentive mechanisms to encourage low carbon development. The
medium-term targets include: (i) reduce GHG emissions from energy and transport
sectors by 7-20% against BAU level by 2020, subject to the level of international
support, (ii) supply at least 25% of energy consumption from renewable energy
sources by 2021 and (iii) increase the ratio of municipalities with more than
10 m2 of green space per capita.'
- text: In the oil sector, the country has benefited from 372 million dollars for
the reduction of gas flaring at the initiative (GGFR - "Global Gas Flaring Reduction")
of the World Bank after having adopted in November 2015 a national reduction plan
flaring and associated gas upgrading. In the electricity sector, the NDC highlights
the development of hydroelectricity which should make it possible to cover 80%
of production in 2025, the remaining 20% being
covered by gas and other renewable energies.
pipeline_tag: text-classification
inference: true
co2_eq_emissions:
emissions: 5.901369050433577
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: Intel(R) Xeon(R) CPU @ 2.00GHz
ram_total_size: 12.674789428710938
hours_used: 0.185
hardware_used: 1 x Tesla T4
base_model: ppsingh/TAPP-multilabel-mpnet
---
# SetFit with ppsingh/TAPP-multilabel-mpnet
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [ppsingh/TAPP-multilabel-mpnet](https://huggingface.co/ppsingh/TAPP-multilabel-mpnet) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [ppsingh/TAPP-multilabel-mpnet](https://huggingface.co/ppsingh/TAPP-multilabel-mpnet)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| NEGATIVE | <ul><li>'(p 70-1).Antigua and Barbuda’s 2021 update to the first Nationally Determined Contribution the most vulnerable in society have been predominantly focused on adaptation measures like building resilience to flooding and hurricanes. The updated NDC ambition provides an opportunity to focus more intently on enabling access to energy efficiency and renewable energy for the most vulnerable, particularly women who are most affected when electricity is not available since the grid is down after an extreme weather event. Nationally, Antigua and Barbuda intends to utilize the SIRF Fund as a mechanism primarily to catalyse and leverage investment in the transition for NGOs, MSMEs and informal sectors that normally cannot access traditional local commercial financing due to perceived high risks.'</li><li>'The transport system cost will be increased by 16.2% compared to the BAU level. Electric trucks and electric pick-ups will account for the highest share of investment followed by electric buses and trucks. In the manufacturing industries, the energy efficiency improvement in the heating and the motor systems and the deployment of CCS require the highest investment in the non-metallic and the chemical industries in 2050. The manufacturing industries system cost will be increased by 15.3% compared to the BAU level.'</li><li>'Figure 1-9: Total GHG emissions by sector (excluding LULUCF) 2000 and 2016 1.2.2 Greenhouse Gas Emission by Sector • Energy Total direct GHG emissions from the Energy sector in 2016 were estimated to be 253,895.61 eq. The majority of GHG emissions in the Energy sector were generated by fuel combustion, consisting mostly of grid-connected electricity and heat production at around eq (42.84%). GHG emissions from Transport, Manufacturing Industries and Construction, and other sectors were 68,260.17 GgCO2 eq eq (6.10%), respectively. Fugitive Emissions from fuel eq or a little over 4.33% of total GHG emissions from the Energy sector. Details of GHG emissions in the Energy sector by gas type and source in 2016 are presented in Figure 1-10. Source: Thailand Third Biennial Update Report, UNFCCC 2020.'</li></ul> |
| TARGET | <ul><li>'DNPM, NFA,. Cocoa. Board,. Spice Board,. Provincial. gov-ernments. in the. Momase. region. Ongoing -. 2025. 340. European Union. Support committed. Priority Sector: Health. By 2030, 100% of the population benefit from introduced health measures to respond to malaria and other climate-sensitive diseases in PNG. Action or Activity. Indicator. Status. Lead. Implementing. Agencies. Supporting. Agencies. Time Frame. Budget (USD). Funding Source. (Existing/Potential). Other Support. Improve vector control. measures, with a priority. of all households having. access to a long-lasting. insecticidal net (LLIN).'</li><li>'Conditionality: With national effort it is intended to increase the attention to vulnerable groups in case of disasters and/or emergencies up to 50% of the target and 100% of the target with international cooperation. Description: In this goal, it is projected to increase coverage from 33% to 50% (211,000 families) of agricultural insurance in attention to the number of families, whose crops were affected by various adverse weather events (flood, drought, frost, hailstorm, among others), in addition to the implementation of comprehensive actions for risk management and adaptation to Climate Change.'</li><li>'By 2030, upgrade watershed health and vitality in at least 20 districts to a higher condition category. By 2030, create an inventory of wetlands in Nepal and sustainably manage vulnerable wetlands. By 2025, enhance the sink capacity of the landuse sector by instituting the Forest Development Fund (FDF) for compensation of plantations and forest restoration. Increase growing stock including Mean Annual Increment in Tarai, Hills and Mountains. Afforest/reforest viable public and private lands, including agroforestry.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("ppsingh/iki_target_setfit")
# Run inference
preds = model("In the oil sector, the country has benefited from 372 million dollars for the reduction of gas flaring at the initiative (GGFR - \"Global Gas Flaring Reduction\") of the World Bank after having adopted in November 2015 a national reduction plan flaring and associated gas upgrading. In the electricity sector, the NDC highlights the development of hydroelectricity which should make it possible to cover 80% of production in 2025, the remaining 20% ​​being covered by gas and other renewable energies.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 58 | 116.6632 | 508 |
| Label | Training Sample Count |
|:---------|:----------------------|
| NEGATIVE | 51 |
| TARGET | 44 |
### Training Hyperparameters
- batch_size: (8, 2)
- num_epochs: (1, 0)
- max_steps: -1
- sampling_strategy: undersampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
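Read together with the list above, these settings correspond roughly to the following SetFit 1.0 training setup (a sketch only; `train_ds` and `eval_ds` are hypothetical 🤗 Datasets with `text`/`label` columns, not part of this repository):
```python
from setfit import SetFitModel, Trainer, TrainingArguments

# differentiable SetFitHead with two classes (TARGET / NEGATIVE), as described above
model = SetFitModel.from_pretrained(
    "ppsingh/TAPP-multilabel-mpnet",
    use_differentiable_head=True,
    head_params={"out_features": 2},
)

args = TrainingArguments(
    batch_size=(8, 2),
    num_epochs=(1, 0),
    sampling_strategy="undersampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.01,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # hypothetical dataset with "text" and "label" columns
    eval_dataset=eval_ds,    # hypothetical
)
trainer.train()
```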
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0018 | 1 | 0.3343 | - |
| 0.1783 | 100 | 0.0026 | 0.1965 |
| 0.3565 | 200 | 0.0001 | 0.1995 |
| 0.5348 | 300 | 0.0001 | 0.2105 |
| 0.7130 | 400 | 0.0001 | 0.2153 |
| 0.8913 | 500 | 0.0 | 0.1927 |
### Training Results Classifier
- Classes Representation in Test Data: Target: 9, Negative: 8
- F1-score: 87.8%
- Accuracy: 88.2%
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.006 kg of CO2
- **Hours Used**: 0.185 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x Tesla T4
- **CPU Model**: Intel(R) Xeon(R) CPU @ 2.00GHz
- **RAM Size**: 12.67 GB
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.3.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
BharatMata/my-dog | BharatMata | 2024-02-12T15:22:42Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-12T15:20:20Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My--Dog Dreambooth model trained by BharatMata following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: Roll-No.27
Sample pictures of this concept:

|
maramzarkaoui/openhermes | maramzarkaoui | 2024-02-12T15:08:14Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"autotrain",
"text-generation",
"conversational",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-08T11:26:35Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
sam1120/dropoff-utcustom-train-SF-RGB-b5_6 | sam1120 | 2024-02-12T14:57:46Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T14:26:12Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGB-b5_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGB-b5_6
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2315
- Mean Iou: 0.6980
- Mean Accuracy: 0.7503
- Overall Accuracy: 0.9714
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.5091
- Accuracy Undropoff: 0.9915
- Iou Unlabeled: nan
- Iou Dropoff: 0.4253
- Iou Undropoff: 0.9708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.0694 | 5.0 | 10 | 1.0190 | 0.2533 | 0.6371 | 0.6676 | nan | 0.6038 | 0.6703 | 0.0 | 0.0976 | 0.6624 |
| 0.8457 | 10.0 | 20 | 0.7681 | 0.4126 | 0.7662 | 0.9307 | nan | 0.5867 | 0.9457 | 0.0 | 0.3078 | 0.9300 |
| 0.6049 | 15.0 | 30 | 0.5718 | 0.4362 | 0.7527 | 0.9568 | nan | 0.5301 | 0.9753 | 0.0 | 0.3527 | 0.9561 |
| 0.5206 | 20.0 | 40 | 0.4181 | 0.4522 | 0.7468 | 0.9662 | nan | 0.5076 | 0.9861 | 0.0 | 0.3909 | 0.9656 |
| 0.3478 | 25.0 | 50 | 0.3144 | 0.4603 | 0.7376 | 0.9709 | nan | 0.4832 | 0.9920 | 0.0 | 0.4105 | 0.9705 |
| 0.2023 | 30.0 | 60 | 0.2893 | 0.4654 | 0.7612 | 0.9701 | nan | 0.5332 | 0.9891 | 0.0 | 0.4267 | 0.9695 |
| 0.1367 | 35.0 | 70 | 0.2351 | 0.6813 | 0.7176 | 0.9715 | nan | 0.4407 | 0.9946 | nan | 0.3916 | 0.9710 |
| 0.1272 | 40.0 | 80 | 0.2364 | 0.6824 | 0.7217 | 0.9713 | nan | 0.4495 | 0.9939 | nan | 0.3941 | 0.9707 |
| 0.0929 | 45.0 | 90 | 0.2536 | 0.4704 | 0.7617 | 0.9718 | nan | 0.5326 | 0.9909 | 0.0 | 0.4401 | 0.9712 |
| 0.0756 | 50.0 | 100 | 0.2253 | 0.6950 | 0.7479 | 0.9710 | nan | 0.5045 | 0.9912 | nan | 0.4197 | 0.9704 |
| 0.0756 | 55.0 | 110 | 0.2305 | 0.7043 | 0.7606 | 0.9716 | nan | 0.5305 | 0.9908 | nan | 0.4375 | 0.9710 |
| 0.0721 | 60.0 | 120 | 0.2213 | 0.6964 | 0.7448 | 0.9716 | nan | 0.4974 | 0.9922 | nan | 0.4218 | 0.9711 |
| 0.0683 | 65.0 | 130 | 0.2338 | 0.7047 | 0.7631 | 0.9715 | nan | 0.5359 | 0.9904 | nan | 0.4385 | 0.9708 |
| 0.0642 | 70.0 | 140 | 0.2314 | 0.7046 | 0.7637 | 0.9714 | nan | 0.5373 | 0.9902 | nan | 0.4385 | 0.9707 |
| 0.0623 | 75.0 | 150 | 0.2205 | 0.7013 | 0.7565 | 0.9714 | nan | 0.5222 | 0.9909 | nan | 0.4317 | 0.9708 |
| 0.0601 | 80.0 | 160 | 0.2209 | 0.6983 | 0.7496 | 0.9715 | nan | 0.5075 | 0.9917 | nan | 0.4257 | 0.9709 |
| 0.0557 | 85.0 | 170 | 0.2067 | 0.6982 | 0.7463 | 0.9719 | nan | 0.5003 | 0.9923 | nan | 0.4252 | 0.9713 |
| 0.0571 | 90.0 | 180 | 0.2354 | 0.7022 | 0.7603 | 0.9712 | nan | 0.5302 | 0.9904 | nan | 0.4339 | 0.9706 |
| 0.0544 | 95.0 | 190 | 0.2240 | 0.7010 | 0.7562 | 0.9714 | nan | 0.5215 | 0.9909 | nan | 0.4311 | 0.9708 |
| 0.0553 | 100.0 | 200 | 0.2204 | 0.6968 | 0.7454 | 0.9717 | nan | 0.4987 | 0.9922 | nan | 0.4225 | 0.9711 |
| 0.0525 | 105.0 | 210 | 0.2332 | 0.7050 | 0.7625 | 0.9716 | nan | 0.5344 | 0.9906 | nan | 0.4390 | 0.9710 |
| 0.0524 | 110.0 | 220 | 0.2371 | 0.7033 | 0.7605 | 0.9715 | nan | 0.5304 | 0.9906 | nan | 0.4359 | 0.9708 |
| 0.0513 | 115.0 | 230 | 0.2333 | 0.6987 | 0.7519 | 0.9714 | nan | 0.5125 | 0.9913 | nan | 0.4267 | 0.9707 |
| 0.0537 | 120.0 | 240 | 0.2315 | 0.6980 | 0.7503 | 0.9714 | nan | 0.5091 | 0.9915 | nan | 0.4253 | 0.9708 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Kavin0211/results | Kavin0211 | 2024-02-12T14:54:59Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-02-12T14:54:51Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1 |
jaCappella/XUMX_jaCappella_VES_48k | jaCappella | 2024-02-12T14:54:38Z | 0 | 0 | null | [
"music",
"speech",
"audio",
"audio-to-audio",
"a cappella",
"vocal ensemble",
"ja",
"dataset:jaCappella",
"arxiv:2211.16028",
"license:cc-by-nc-4.0",
"region:us"
] | audio-to-audio | 2023-01-21T06:25:19Z | ---
license: cc-by-nc-4.0
language:
- ja
tags:
- music
- speech
- audio
- audio-to-audio
- a cappella
- vocal ensemble
datasets:
- jaCappella
metrics:
- SI-SDR
---
# X-UMX trained with the jaCappella corpus for vocal ensemble separation
This model was trained by Tomohiko Nakamura using [the codebase](https://github.com/TomohikoNakamura/asteroid_jaCappella).
It was trained on the vocal ensemble separation task of [the jaCappella dataset](https://tomohikonakamura.github.io/jaCappella_corpus/).
[The paper](https://doi.org/10.1109/ICASSP49357.2023.10095569) was published in ICASSP 2023 ([arXiv](https://arxiv.org/abs/2211.16028)).
# License
See [the jaCappella dataset page](https://tomohikonakamura.github.io/jaCappella_corpus/).
# Citation
See [the jaCappella dataset page](https://tomohikonakamura.github.io/jaCappella_corpus/).
# Configuration
```yaml
data:
num_workers: 12
sample_rate: 48000
samples_per_track: 13
seed: 42
seq_dur: 6.0
source_augmentations:
- gain
sources:
- vocal_percussion
- bass
- alto
- tenor
- soprano
- lead_vocal
model:
bandwidth: 16000
bidirectional: true
hidden_size: 512
in_chan: 4096
nb_channels: 1
nhop: 1024
pretrained: null
spec_power: 1
window_length: 4096
optim:
lr: 0.001
lr_decay_gamma: 0.3
lr_decay_patience: 80
optimizer: adam
patience: 1000
weight_decay: 1.0e-05
training:
batch_size: 16
epochs: 1000
loss_combine_sources: true
loss_use_multidomain: true
mix_coef: 10.0
val_dur: 80.0
```
# Results (SI-SDR [dB]) on vocal ensemble separation
| Method | Lead vocal | Soprano | Alto | Tenor | Bass |Vocal percussion|
|:---------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| X-UMX | 7.5 | 10.7 | 13.5 | 10.2 | 9.1 | 21.0 | |
pgajo/mbert-xlwa-en-it_EW-TT-PE_U1_S0_DROP1_mbert_E8_DEV98.0 | pgajo | 2024-02-12T14:51:50Z | 94 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-12T14:50:50Z | ---
{}
---
Model description:

- Model: pgajo/mbert-xlwa-en-it
- Dataset: TASTEset
- Unshuffled ratio: ['1']
- Shuffled ratio: ['0']
- Best exact match epoch: 8
- Best exact match: 98.07
- Best epoch: 8
- Drop duplicates: ['1']
- Max epochs = 10
- Optimizer lr = 3e-05
- Optimizer eps = 1e-08
- Batch size = 32
- Dataset path = pgajo/EW-TT-PE_U1_S0_DROP1_mbert

Results:
| epoch | train_loss | train_f1 | train_exact | dev_loss | dev_f1 | dev_exact | test_loss | test_f1 | test_exact |
|--------:|-------------:|-----------:|--------------:|-----------:|---------:|------------:|------------:|----------:|-------------:|
| 1 | 0.42 | 88.03 | 77.33 | 0.08 | 97.54 | 95.58 | 0 | 0 | 0 |
| 2 | 0.05 | 99.22 | 97.72 | 0.05 | 98.33 | 97.24 | 0 | 0 | 0 |
| 3 | 0.02 | 99.66 | 99.1 | 0.07 | 98.37 | 96.69 | 0 | 0 | 0 |
| 4 | 0.02 | 99.61 | 99.1 | 0.06 | 98.43 | 96.96 | 0 | 0 | 0 |
| 5 | 0.01 | 99.69 | 99.31 | 0.05 | 98.72 | 97.51 | 0 | 0 | 0 |
| 6 | 0.01 | 99.75 | 99.38 | 0.03 | 98.62 | 97.24 | 0 | 0 | 0 |
| 7 | 0.01 | 99.97 | 99.86 | 0.04 | 98.83 | 97.79 | 0 | 0 | 0 |
| 8 | 0 | 99.91 | 99.86 | 0.04 | 98.98 | 98.07 | 0 | 0 | 0 |
| 9 | 0 | 99.88 | 99.79 | 0.03 | 99.22 | 98.07 | 0 | 0 | 0 |
| 10 | 0 | 99.88 | 99.72 | 0.05 | 98.84 | 97.51 | 0 | 0 | 0 | |
Shijia/furina_seed42_eng_amh_esp_roman | Shijia | 2024-02-12T14:51:27Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T14:50:32Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_seed42_eng_amh_esp_roman
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_seed42_eng_amh_esp_roman
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0144
- Spearman Corr: 0.8461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.59 | 200 | 0.0299 | 0.6782 |
| No log | 1.18 | 400 | 0.0251 | 0.7278 |
| No log | 1.76 | 600 | 0.0202 | 0.7493 |
| 0.0425 | 2.35 | 800 | 0.0194 | 0.7584 |
| 0.0425 | 2.94 | 1000 | 0.0184 | 0.7737 |
| 0.0425 | 3.53 | 1200 | 0.0189 | 0.7734 |
| 0.0184 | 4.12 | 1400 | 0.0180 | 0.7906 |
| 0.0184 | 4.71 | 1600 | 0.0188 | 0.7909 |
| 0.0184 | 5.29 | 1800 | 0.0171 | 0.7971 |
| 0.0184 | 5.88 | 2000 | 0.0165 | 0.8055 |
| 0.0134 | 6.47 | 2200 | 0.0162 | 0.8059 |
| 0.0134 | 7.06 | 2400 | 0.0164 | 0.8085 |
| 0.0134 | 7.65 | 2600 | 0.0169 | 0.8131 |
| 0.0098 | 8.24 | 2800 | 0.0169 | 0.8171 |
| 0.0098 | 8.82 | 3000 | 0.0158 | 0.8169 |
| 0.0098 | 9.41 | 3200 | 0.0152 | 0.8201 |
| 0.0073 | 10.0 | 3400 | 0.0165 | 0.8197 |
| 0.0073 | 10.59 | 3600 | 0.0150 | 0.8234 |
| 0.0073 | 11.18 | 3800 | 0.0152 | 0.8284 |
| 0.0073 | 11.76 | 4000 | 0.0141 | 0.8338 |
| 0.0059 | 12.35 | 4200 | 0.0144 | 0.8315 |
| 0.0059 | 12.94 | 4400 | 0.0147 | 0.8348 |
| 0.0059 | 13.53 | 4600 | 0.0157 | 0.8327 |
| 0.0049 | 14.12 | 4800 | 0.0147 | 0.8379 |
| 0.0049 | 14.71 | 5000 | 0.0149 | 0.8365 |
| 0.0049 | 15.29 | 5200 | 0.0142 | 0.8360 |
| 0.0049 | 15.88 | 5400 | 0.0140 | 0.8409 |
| 0.0042 | 16.47 | 5600 | 0.0135 | 0.8414 |
| 0.0042 | 17.06 | 5800 | 0.0141 | 0.8410 |
| 0.0042 | 17.65 | 6000 | 0.0144 | 0.8402 |
| 0.0037 | 18.24 | 6200 | 0.0151 | 0.8435 |
| 0.0037 | 18.82 | 6400 | 0.0140 | 0.8431 |
| 0.0037 | 19.41 | 6600 | 0.0140 | 0.8454 |
| 0.0033 | 20.0 | 6800 | 0.0136 | 0.8453 |
| 0.0033 | 20.59 | 7000 | 0.0137 | 0.8446 |
| 0.0033 | 21.18 | 7200 | 0.0144 | 0.8461 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
Commandante/german-party-sentiment-bert-complete-synonyms-5e-5 | Commandante | 2024-02-12T14:45:39Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:mdraw/german-news-sentiment-bert",
"base_model:finetune:mdraw/german-news-sentiment-bert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-09T02:21:11Z | ---
base_model: mdraw/german-news-sentiment-bert
tags:
- generated_from_trainer
model-index:
- name: german-party-sentiment-bert-complete-synonyms-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-party-sentiment-bert-complete-synonyms-5e-5
This model is a fine-tuned version of [mdraw/german-news-sentiment-bert](https://huggingface.co/mdraw/german-news-sentiment-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8769
## Model description
More information needed
## Intended uses & limitations
More information needed
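Pending an official example, a minimal inference sketch (the label set is assumed to follow the base sentiment model; the sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Commandante/german-party-sentiment-bert-complete-synonyms-5e-5",
)

# Illustrative German sentence mentioning a political party.
print(classifier("Die Partei hat in der Debatte einen souveränen Eindruck hinterlassen."))
```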
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 120
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9596 | 1.0 | 70 | 0.9676 |
| 0.9122 | 2.0 | 140 | 0.8769 |
| 0.7382 | 3.0 | 210 | 0.9984 |
| 0.5708 | 4.0 | 280 | 1.1080 |
| 0.3579 | 5.0 | 350 | 1.4137 |
| 0.3066 | 6.0 | 420 | 1.8204 |
| 0.1716 | 7.0 | 490 | 1.8167 |
| 0.1974 | 8.0 | 560 | 2.1479 |
| 0.1164 | 9.0 | 630 | 2.3899 |
| 0.0878 | 10.0 | 700 | 2.5266 |
| 0.07 | 11.0 | 770 | 2.7014 |
| 0.0604 | 12.0 | 840 | 2.7048 |
| 0.0278 | 13.0 | 910 | 2.8119 |
| 0.0376 | 14.0 | 980 | 2.8799 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Tokenizers 0.15.1
|
Deepreneur/blue-lizard | Deepreneur | 2024-02-12T14:43:33Z | 7 | 13 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ja",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-05T16:29:47Z | ---
license: llama2
language:
- ja
---
# Deepreneur-blue-lizard
<!-- Provide a quick summary of what the model is/does. -->

## Model Description
<!-- Provide a longer summary of what this model is. -->
Deepreneur-blue-lizard is a model based on Meta's Llama-2-7b, further pretrained on Japanese data such as Wikipedia and books and then fine-tuned on proprietary data.
Despite being a very lightweight model with only 7 billion parameters, it scores higher than ChatGPT-3.5 in evaluations on JGLUE (a benchmark for Japanese-language tasks), making it the best-performing publicly available Japanese model.
Note: no JGLUE data was used for training, and no outputs from ChatGPT or similar models were used as training data.
## How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。"  # "You are a sincere and excellent Japanese assistant."
text = "deepreneurについて教えて"  # "Tell me about Deepreneur"
model_name = "Deepreneur/blue-lizard"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
)
if torch.cuda.is_available():
model = model.to("cuda")
prompt = "{bos_token}{b_inst} {system}{prompt} {e_inst}".format(
bos_token=tokenizer.bos_token,
b_inst=B_INST,
system=f"{B_SYS}{DEFAULT_SYSTEM_PROMPT}{E_SYS}",
prompt=text,
e_inst=E_INST,
)
with torch.no_grad():
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1) :], skip_special_tokens=True)
print(output)
"""
株式会社Deepreneurは、言語系の生成AIに強みを持ったAIスタートアップです。
東京大学松尾研究室発AIスタートアップに認定されており、大規模言語モデル(Large Language Model)の開発をはじめとする基礎研究や、企業との共同研究を通じてDXを推進します。
Deepreneurのホームページ: https://www.deepreneur.com/
Deepreneurのメールアドレス: [email protected]
"""
```
## Developers
Listed below in alphabetical order:
- Ikuto Watanabe
- Sunwoo Park
- Taiki Kaneki
- Yuki Hirota
- Yuki Koshiba
- Yusuke Kanzaki
- Yuta Sawada
## Licence
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. |
sam1120/dropoff-utcustom-train-SF-RGB-b5_2 | sam1120 | 2024-02-12T14:41:07Z | 151 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T14:24:47Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGB-b5_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGB-b5_2
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4848
- Mean Iou: 0.4257
- Mean Accuracy: 0.7972
- Overall Accuracy: 0.9466
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.6343
- Accuracy Undropoff: 0.9601
- Iou Unlabeled: 0.0
- Iou Dropoff: 0.3321
- Iou Undropoff: 0.9451
## Model description
More information needed
## Intended uses & limitations
More information needed
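Pending an official example, a minimal inference sketch (this assumes the image processor config was pushed with the checkpoint; the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

model_id = "sam1120/dropoff-utcustom-train-SF-RGB-b5_2"
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample to the original resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]
```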
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.0108 | 5.0 | 10 | 1.0721 | 0.1514 | 0.5401 | 0.4205 | nan | 0.6706 | 0.4096 | 0.0 | 0.0494 | 0.4047 |
| 0.9654 | 10.0 | 20 | 0.9802 | 0.2190 | 0.6570 | 0.5944 | nan | 0.7253 | 0.5887 | 0.0 | 0.0745 | 0.5826 |
| 0.9175 | 15.0 | 30 | 0.9047 | 0.2553 | 0.7350 | 0.6792 | nan | 0.7960 | 0.6741 | 0.0 | 0.0973 | 0.6686 |
| 0.9052 | 20.0 | 40 | 0.8427 | 0.2812 | 0.7661 | 0.7377 | nan | 0.7971 | 0.7351 | 0.0 | 0.1146 | 0.7290 |
| 0.8555 | 25.0 | 50 | 0.7970 | 0.3063 | 0.7827 | 0.7900 | nan | 0.7748 | 0.7906 | 0.0 | 0.1357 | 0.7832 |
| 0.8291 | 30.0 | 60 | 0.7543 | 0.3289 | 0.7891 | 0.8332 | nan | 0.7410 | 0.8372 | 0.0 | 0.1586 | 0.8282 |
| 0.7923 | 35.0 | 70 | 0.7327 | 0.3375 | 0.7961 | 0.8471 | nan | 0.7405 | 0.8517 | 0.0 | 0.1701 | 0.8425 |
| 0.7724 | 40.0 | 80 | 0.6994 | 0.3529 | 0.7968 | 0.8719 | nan | 0.7149 | 0.8787 | 0.0 | 0.1906 | 0.8682 |
| 0.7215 | 45.0 | 90 | 0.6675 | 0.3694 | 0.7935 | 0.8954 | nan | 0.6824 | 0.9047 | 0.0 | 0.2157 | 0.8926 |
| 0.6907 | 50.0 | 100 | 0.6521 | 0.3742 | 0.7998 | 0.9000 | nan | 0.6904 | 0.9091 | 0.0 | 0.2252 | 0.8973 |
| 0.6768 | 55.0 | 110 | 0.6260 | 0.3850 | 0.8022 | 0.9118 | nan | 0.6827 | 0.9217 | 0.0 | 0.2455 | 0.9094 |
| 0.659 | 60.0 | 120 | 0.6010 | 0.3965 | 0.7973 | 0.9244 | nan | 0.6586 | 0.9359 | 0.0 | 0.2671 | 0.9224 |
| 0.6265 | 65.0 | 130 | 0.5847 | 0.4005 | 0.7992 | 0.9276 | nan | 0.6592 | 0.9393 | 0.0 | 0.2757 | 0.9258 |
| 0.6134 | 70.0 | 140 | 0.5673 | 0.4060 | 0.8022 | 0.9316 | nan | 0.6611 | 0.9433 | 0.0 | 0.2881 | 0.9297 |
| 0.5864 | 75.0 | 150 | 0.5401 | 0.4132 | 0.7961 | 0.9383 | nan | 0.6410 | 0.9511 | 0.0 | 0.3029 | 0.9366 |
| 0.5686 | 80.0 | 160 | 0.5289 | 0.4153 | 0.7974 | 0.9395 | nan | 0.6424 | 0.9524 | 0.0 | 0.3080 | 0.9379 |
| 0.5597 | 85.0 | 170 | 0.5386 | 0.4114 | 0.8079 | 0.9350 | nan | 0.6692 | 0.9465 | 0.0 | 0.3011 | 0.9331 |
| 0.5718 | 90.0 | 180 | 0.5080 | 0.4210 | 0.7947 | 0.9438 | nan | 0.6321 | 0.9573 | 0.0 | 0.3208 | 0.9423 |
| 0.517 | 95.0 | 190 | 0.5026 | 0.4222 | 0.7956 | 0.9445 | nan | 0.6332 | 0.9580 | 0.0 | 0.3236 | 0.9430 |
| 0.5252 | 100.0 | 200 | 0.4990 | 0.4232 | 0.7969 | 0.9450 | nan | 0.6354 | 0.9584 | 0.0 | 0.3261 | 0.9435 |
| 0.5174 | 105.0 | 210 | 0.4951 | 0.4223 | 0.8012 | 0.9437 | nan | 0.6457 | 0.9567 | 0.0 | 0.3249 | 0.9422 |
| 0.5217 | 110.0 | 220 | 0.4882 | 0.4238 | 0.7993 | 0.9450 | nan | 0.6404 | 0.9582 | 0.0 | 0.3280 | 0.9435 |
| 0.5224 | 115.0 | 230 | 0.4846 | 0.4258 | 0.7968 | 0.9467 | nan | 0.6333 | 0.9603 | 0.0 | 0.3321 | 0.9452 |
| 0.5399 | 120.0 | 240 | 0.4848 | 0.4257 | 0.7972 | 0.9466 | nan | 0.6343 | 0.9601 | 0.0 | 0.3321 | 0.9451 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sam1120/dropoff-utcustom-train-SF-RGB-b5_1 | sam1120 | 2024-02-12T14:40:35Z | 147 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T14:24:17Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGB-b5_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGB-b5_1
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6279
- Mean Iou: 0.4054
- Mean Accuracy: 0.7471
- Overall Accuracy: 0.8860
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.5956
- Accuracy Undropoff: 0.8986
- Iou Unlabeled: 0.0
- Iou Dropoff: 0.3318
- Iou Undropoff: 0.8843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.0071 | 5.0 | 10 | 1.0206 | 0.1745 | 0.2748 | 0.5034 | nan | 0.0255 | 0.5241 | 0.0 | 0.0147 | 0.5087 |
| 0.9688 | 10.0 | 20 | 0.9873 | 0.2140 | 0.3486 | 0.5771 | nan | 0.0992 | 0.5979 | 0.0 | 0.0582 | 0.5838 |
| 0.9406 | 15.0 | 30 | 0.9313 | 0.2613 | 0.4446 | 0.6655 | nan | 0.2038 | 0.6855 | 0.0 | 0.1135 | 0.6705 |
| 0.9278 | 20.0 | 40 | 0.8851 | 0.2930 | 0.5149 | 0.7111 | nan | 0.3009 | 0.7289 | 0.0 | 0.1648 | 0.7142 |
| 0.8956 | 25.0 | 50 | 0.8563 | 0.3118 | 0.5642 | 0.7358 | nan | 0.3770 | 0.7514 | 0.0 | 0.1985 | 0.7370 |
| 0.8674 | 30.0 | 60 | 0.8260 | 0.3303 | 0.6086 | 0.7664 | nan | 0.4366 | 0.7807 | 0.0 | 0.2246 | 0.7664 |
| 0.8438 | 35.0 | 70 | 0.8149 | 0.3347 | 0.6355 | 0.7671 | nan | 0.4921 | 0.7790 | 0.0 | 0.2381 | 0.7660 |
| 0.8309 | 40.0 | 80 | 0.7881 | 0.3459 | 0.6472 | 0.7847 | nan | 0.4972 | 0.7972 | 0.0 | 0.2539 | 0.7839 |
| 0.8069 | 45.0 | 90 | 0.7640 | 0.3567 | 0.6617 | 0.8041 | nan | 0.5063 | 0.8170 | 0.0 | 0.2668 | 0.8033 |
| 0.7779 | 50.0 | 100 | 0.7486 | 0.3637 | 0.6792 | 0.8145 | nan | 0.5316 | 0.8268 | 0.0 | 0.2778 | 0.8132 |
| 0.7695 | 55.0 | 110 | 0.7354 | 0.3684 | 0.6936 | 0.8214 | nan | 0.5542 | 0.8329 | 0.0 | 0.2858 | 0.8195 |
| 0.7568 | 60.0 | 120 | 0.7164 | 0.3757 | 0.7032 | 0.8365 | nan | 0.5577 | 0.8486 | 0.0 | 0.2924 | 0.8347 |
| 0.7285 | 65.0 | 130 | 0.6976 | 0.3836 | 0.7119 | 0.8484 | nan | 0.5630 | 0.8608 | 0.0 | 0.3042 | 0.8467 |
| 0.7217 | 70.0 | 140 | 0.6922 | 0.3857 | 0.7217 | 0.8499 | nan | 0.5817 | 0.8616 | 0.0 | 0.3091 | 0.8480 |
| 0.7095 | 75.0 | 150 | 0.6708 | 0.3926 | 0.7287 | 0.8624 | nan | 0.5828 | 0.8745 | 0.0 | 0.3172 | 0.8605 |
| 0.6944 | 80.0 | 160 | 0.6637 | 0.3951 | 0.7320 | 0.8660 | nan | 0.5858 | 0.8781 | 0.0 | 0.3212 | 0.8641 |
| 0.6878 | 85.0 | 170 | 0.6632 | 0.3942 | 0.7397 | 0.8673 | nan | 0.6005 | 0.8788 | 0.0 | 0.3175 | 0.8652 |
| 0.6868 | 90.0 | 180 | 0.6468 | 0.3998 | 0.7391 | 0.8756 | nan | 0.5902 | 0.8880 | 0.0 | 0.3257 | 0.8739 |
| 0.6581 | 95.0 | 190 | 0.6444 | 0.4003 | 0.7421 | 0.8776 | nan | 0.5942 | 0.8899 | 0.0 | 0.3249 | 0.8759 |
| 0.6587 | 100.0 | 200 | 0.6383 | 0.4026 | 0.7427 | 0.8814 | nan | 0.5914 | 0.8940 | 0.0 | 0.3281 | 0.8797 |
| 0.6525 | 105.0 | 210 | 0.6334 | 0.4032 | 0.7434 | 0.8825 | nan | 0.5918 | 0.8951 | 0.0 | 0.3289 | 0.8808 |
| 0.658 | 110.0 | 220 | 0.6345 | 0.4026 | 0.7451 | 0.8811 | nan | 0.5968 | 0.8934 | 0.0 | 0.3285 | 0.8793 |
| 0.6575 | 115.0 | 230 | 0.6300 | 0.4050 | 0.7463 | 0.8851 | nan | 0.5948 | 0.8977 | 0.0 | 0.3314 | 0.8835 |
| 0.6625 | 120.0 | 240 | 0.6279 | 0.4054 | 0.7471 | 0.8860 | nan | 0.5956 | 0.8986 | 0.0 | 0.3318 | 0.8843 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Stoub/Stoub-ppo-LunarLander-v2 | Stoub | 2024-02-12T14:40:01Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T22:37:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.40 +/- 21.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual file name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo and load it (filename below is assumed).
checkpoint = load_from_hub("Stoub/Stoub-ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sam1120/dropoff-utcustom-train-SF-RGB-b5_3 | sam1120 | 2024-02-12T14:40:00Z | 155 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T14:24:49Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGB-b5_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGB-b5_3
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3770
- Mean Iou: 0.4572
- Mean Accuracy: 0.7822
- Overall Accuracy: 0.9640
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.5839
- Accuracy Undropoff: 0.9805
- Iou Unlabeled: 0.0
- Iou Dropoff: 0.4086
- Iou Undropoff: 0.9631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.3135 | 5.0 | 10 | 1.2008 | 0.0546 | 0.2586 | 0.1227 | nan | 0.4069 | 0.1103 | 0.0 | 0.0535 | 0.1102 |
| 1.2309 | 10.0 | 20 | 1.1294 | 0.1176 | 0.3397 | 0.2490 | nan | 0.4388 | 0.2407 | 0.0 | 0.1129 | 0.2400 |
| 1.1346 | 15.0 | 30 | 1.0395 | 0.2171 | 0.4865 | 0.5022 | nan | 0.4694 | 0.5036 | 0.0 | 0.1524 | 0.4989 |
| 1.1088 | 20.0 | 40 | 0.9755 | 0.2608 | 0.5521 | 0.6176 | nan | 0.4808 | 0.6235 | 0.0 | 0.1661 | 0.6163 |
| 1.007 | 25.0 | 50 | 0.9197 | 0.2895 | 0.5959 | 0.6775 | nan | 0.5068 | 0.6849 | 0.0 | 0.1923 | 0.6763 |
| 0.9145 | 30.0 | 60 | 0.8635 | 0.3162 | 0.6299 | 0.7335 | nan | 0.5168 | 0.7429 | 0.0 | 0.2156 | 0.7329 |
| 0.8745 | 35.0 | 70 | 0.8070 | 0.3398 | 0.6784 | 0.7808 | nan | 0.5667 | 0.7901 | 0.0 | 0.2404 | 0.7791 |
| 0.8088 | 40.0 | 80 | 0.7442 | 0.3667 | 0.7191 | 0.8290 | nan | 0.5993 | 0.8389 | 0.0 | 0.2730 | 0.8272 |
| 0.7184 | 45.0 | 90 | 0.6956 | 0.3832 | 0.7513 | 0.8603 | nan | 0.6323 | 0.8702 | 0.0 | 0.2915 | 0.8580 |
| 0.6908 | 50.0 | 100 | 0.6751 | 0.3931 | 0.7592 | 0.8748 | nan | 0.6332 | 0.8853 | 0.0 | 0.3067 | 0.8728 |
| 0.643 | 55.0 | 110 | 0.6101 | 0.4134 | 0.7714 | 0.9108 | nan | 0.6194 | 0.9234 | 0.0 | 0.3308 | 0.9094 |
| 0.6014 | 60.0 | 120 | 0.5971 | 0.4166 | 0.7826 | 0.9189 | nan | 0.6339 | 0.9313 | 0.0 | 0.3324 | 0.9175 |
| 0.5685 | 65.0 | 130 | 0.5595 | 0.4304 | 0.7946 | 0.9328 | nan | 0.6439 | 0.9453 | 0.0 | 0.3599 | 0.9314 |
| 0.5172 | 70.0 | 140 | 0.5344 | 0.4373 | 0.8010 | 0.9406 | nan | 0.6488 | 0.9532 | 0.0 | 0.3727 | 0.9393 |
| 0.4757 | 75.0 | 150 | 0.4963 | 0.4434 | 0.7997 | 0.9490 | nan | 0.6368 | 0.9626 | 0.0 | 0.3822 | 0.9479 |
| 0.4288 | 80.0 | 160 | 0.4599 | 0.4488 | 0.7936 | 0.9556 | nan | 0.6169 | 0.9702 | 0.0 | 0.3918 | 0.9546 |
| 0.4124 | 85.0 | 170 | 0.4710 | 0.4469 | 0.7989 | 0.9540 | nan | 0.6296 | 0.9681 | 0.0 | 0.3876 | 0.9529 |
| 0.4995 | 90.0 | 180 | 0.4209 | 0.4537 | 0.7883 | 0.9606 | nan | 0.6004 | 0.9762 | 0.0 | 0.4015 | 0.9597 |
| 0.3815 | 95.0 | 190 | 0.4287 | 0.4524 | 0.7919 | 0.9595 | nan | 0.6090 | 0.9748 | 0.0 | 0.3988 | 0.9586 |
| 0.3764 | 100.0 | 200 | 0.4245 | 0.4529 | 0.7913 | 0.9600 | nan | 0.6073 | 0.9753 | 0.0 | 0.3998 | 0.9590 |
| 0.4074 | 105.0 | 210 | 0.4096 | 0.4542 | 0.7894 | 0.9613 | nan | 0.6018 | 0.9769 | 0.0 | 0.4021 | 0.9603 |
| 0.3975 | 110.0 | 220 | 0.4107 | 0.4538 | 0.7905 | 0.9610 | nan | 0.6045 | 0.9765 | 0.0 | 0.4013 | 0.9601 |
| 0.3598 | 115.0 | 230 | 0.3918 | 0.4558 | 0.7863 | 0.9627 | nan | 0.5939 | 0.9787 | 0.0 | 0.4057 | 0.9618 |
| 0.3709 | 120.0 | 240 | 0.3770 | 0.4572 | 0.7822 | 0.9640 | nan | 0.5839 | 0.9805 | 0.0 | 0.4086 | 0.9631 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Guilherme34/Jennifer-uwu-version | Guilherme34 | 2024-02-12T14:23:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-12T14:23:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/HerculeanSea-7b-128k-exl2 | bartowski | 2024-02-12T14:21:45Z | 0 | 1 | transformers | [
"transformers",
"mergekit",
"merge",
"text-generation",
"base_model:Locutusque/Hercules-2.0-Mistral-7B",
"base_model:finetune:Locutusque/Hercules-2.0-Mistral-7B",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-12T14:05:20Z | ---
base_model:
- Test157t/Pasta-Sea-7b-128k
- Locutusque/Hercules-2.0-Mistral-7B
library_name: transformers
tags:
- mergekit
- merge
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of HerculeanSea-7b-128k
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains a different bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/Test157t/HerculeanSea-7b-128k
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/HerculeanSea-7b-128k-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/HerculeanSea-7b-128k-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/HerculeanSea-7b-128k-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/HerculeanSea-7b-128k-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/HerculeanSea-7b-128k-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/HerculeanSea-7b-128k-exl2 HerculeanSea-7b-128k-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just want the measurement.json) to a folder called `HerculeanSea-7b-128k-exl2`:
```shell
mkdir HerculeanSea-7b-128k-exl2
huggingface-cli download bartowski/HerculeanSea-7b-128k-exl2 --local-dir HerculeanSea-7b-128k-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir HerculeanSea-7b-128k-exl2-6_5
huggingface-cli download bartowski/HerculeanSea-7b-128k-exl2 --revision 6_5 --local-dir HerculeanSea-7b-128k-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir HerculeanSea-7b-128k-exl2-6.5
huggingface-cli download bartowski/HerculeanSea-7b-128k-exl2 --revision 6_5 --local-dir HerculeanSea-7b-128k-exl2-6.5 --local-dir-use-symlinks False
```
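The same per-branch download can be scripted from Python with `huggingface_hub` (a sketch using the 6.5 bpw branch, mirroring the CLI commands above):
```python
from huggingface_hub import snapshot_download

# Download the 6_5 branch into a local folder.
snapshot_download(
    repo_id="bartowski/HerculeanSea-7b-128k-exl2",
    revision="6_5",
    local_dir="HerculeanSea-7b-128k-exl2-6_5",
    local_dir_use_symlinks=False,
)
```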
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
Shijia/furina_seed42_eng_kin_amh_roman | Shijia | 2024-02-12T14:19:22Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T14:18:30Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_seed42_eng_kin_amh_roman
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_seed42_eng_kin_amh_roman
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0284
- Spearman Corr: 0.7771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
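These settings map roughly onto the following `TrainingArguments` (a sketch, not the exact training script):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="furina_seed42_eng_kin_amh_roman",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=2,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```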
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.65 | 200 | 0.0373 | 0.5747 |
| No log | 1.3 | 400 | 0.0297 | 0.6851 |
| No log | 1.95 | 600 | 0.0311 | 0.7236 |
| 0.0545 | 2.61 | 800 | 0.0305 | 0.7322 |
| 0.0545 | 3.26 | 1000 | 0.0281 | 0.7496 |
| 0.0545 | 3.91 | 1200 | 0.0278 | 0.7582 |
| 0.0208 | 4.56 | 1400 | 0.0278 | 0.7528 |
| 0.0208 | 5.21 | 1600 | 0.0238 | 0.7556 |
| 0.0208 | 5.86 | 1800 | 0.0235 | 0.7631 |
| 0.0143 | 6.51 | 2000 | 0.0245 | 0.7634 |
| 0.0143 | 7.17 | 2200 | 0.0243 | 0.7619 |
| 0.0143 | 7.82 | 2400 | 0.0242 | 0.7651 |
| 0.0102 | 8.47 | 2600 | 0.0257 | 0.7645 |
| 0.0102 | 9.12 | 2800 | 0.0271 | 0.7713 |
| 0.0102 | 9.77 | 3000 | 0.0255 | 0.7661 |
| 0.0079 | 10.42 | 3200 | 0.0218 | 0.7720 |
| 0.0079 | 11.07 | 3400 | 0.0250 | 0.7658 |
| 0.0079 | 11.73 | 3600 | 0.0266 | 0.7628 |
| 0.0064 | 12.38 | 3800 | 0.0267 | 0.7657 |
| 0.0064 | 13.03 | 4000 | 0.0261 | 0.7680 |
| 0.0064 | 13.68 | 4200 | 0.0232 | 0.7720 |
| 0.0055 | 14.33 | 4400 | 0.0256 | 0.7737 |
| 0.0055 | 14.98 | 4600 | 0.0237 | 0.7736 |
| 0.0055 | 15.64 | 4800 | 0.0284 | 0.7771 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
NBA55/llama2-7B-without-diversity-epoch-10-new | NBA55 | 2024-02-12T14:09:12Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-02-12T14:09:02Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
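The listed values correspond roughly to the following `BitsAndBytesConfig` (a sketch):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```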
### Framework versions
- PEFT 0.4.0
|
defog/sqlcoder-7b-2 | defog | 2024-02-12T14:06:11Z | 132,640 | 311 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-05T14:36:51Z | ---
license: cc-by-sa-4.0
library_name: transformers
pipeline_tag: text-generation
---
# Update notice
The model weights were updated at 7 AM UTC on Feb 7, 2024. The new model weights lead to a much more performant model – particularly for joins.
If you downloaded the model before that, please redownload the weights for best performance.
# Model Card for SQLCoder-7B-2
A capable large language model for natural language to SQL generation.

## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Defog, Inc](https://defog.ai)
- **Model type:** [Text to SQL]
- **License:** [CC-by-SA-4.0]
- **Finetuned from model:** [CodeLlama-7B]
### Model Sources [optional]
- [**HuggingFace:**](https://huggingface.co/defog/sqlcoder-70b-alpha)
- [**GitHub:**](https://github.com/defog-ai/sqlcoder)
- [**Demo:**](https://defog.ai/sqlcoder-demo/)
## Uses
This model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool.
This model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.
## How to Get Started with the Model
Use the code [here](https://github.com/defog-ai/sqlcoder/blob/main/inference.py) to get started with the model.
## Prompt
Please use the following prompt for optimal results, and remember to set `do_sample=False` and `num_beams=4` when generating.
```
### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]
### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}
### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]
```
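For reference, a minimal generation sketch that follows these recommendations (the schema and question are placeholders; the linked `inference.py` is the canonical implementation):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "defog/sqlcoder-7b-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

question = "How many users signed up last month?"  # placeholder question
schema = "CREATE TABLE users (id INT, created_at DATE);"  # placeholder schema

prompt = f"""### Task
Generate a SQL query to answer [QUESTION]{question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{schema}

### Answer
Given the database schema, here is the SQL query that [QUESTION]{question}[/QUESTION]
[SQL]
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False, num_beams=4)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```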
## Evaluation
This model was evaluated on [SQL-Eval](https://github.com/defog-ai/sql-eval), a PostgreSQL based evaluation framework developed by Defog for testing and alignment of model capabilities.
You can read more about the methodology behind SQLEval [here](https://defog.ai/blog/open-sourcing-sqleval/).
### Results
We classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.
| | date | group_by | order_by | ratio | join | where |
| -------------- | ---- | -------- | -------- | ----- | ---- | ----- |
| sqlcoder-70b | 96 | 91.4 | 97.1 | 85.7 | 97.1 | 91.4 |
| sqlcoder-7b-2 | 96 | 91.4 | 94.3 | 91.4 | 94.3 | 77.1 |
| sqlcoder-34b | 80 | 94.3 | 85.7 | 77.1 | 85.7 | 80 |
| gpt-4 | 72 | 94.3 | 97.1 | 80 | 91.4 | 80 |
| gpt-4-turbo | 76 | 91.4 | 91.4 | 62.8 | 88.6 | 77.1 |
| natural-sql-7b | 56 | 88.6 | 85.7 | 60 | 88.6 | 80 |
| sqlcoder-7b | 64 | 82.9 | 74.3 | 54.3 | 74.3 | 74.3 |
| gpt-3.5 | 72 | 77.1 | 82.8 | 34.3 | 65.7 | 71.4 |
| claude-2 | 52 | 71.4 | 74.3 | 57.1 | 65.7 | 62.9 |
## Model Card Contact
Contact us on X at [@defogdata](https://twitter.com/defogdata), or on email at [[email protected]](mailto:[email protected]) |
giulio98/placeholder | giulio98 | 2024-02-12T13:58:57Z | 0 | 0 | null | [
"mteb",
"model-index",
"region:us"
] | null | 2024-02-12T13:50:09Z | ---
tags:
- mteb
model-index:
- name: bge_finetuned
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 61.64179104477612
- type: ap
value: 25.20497978200253
- type: f1
value: 55.51169205110252
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 58.6114
- type: ap
value: 55.013881977883706
- type: f1
value: 58.0798269108889
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 27.009999999999994
- type: f1
value: 26.230644551993027
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.011000000000001
- type: map_at_10
value: 24.082
- type: map_at_100
value: 25.273
- type: map_at_1000
value: 25.336
- type: map_at_3
value: 20.341
- type: map_at_5
value: 22.155
- type: mrr_at_1
value: 14.651
- type: mrr_at_10
value: 24.306
- type: mrr_at_100
value: 25.503999999999998
- type: mrr_at_1000
value: 25.566
- type: mrr_at_3
value: 20.59
- type: mrr_at_5
value: 22.400000000000002
- type: ndcg_at_1
value: 14.011000000000001
- type: ndcg_at_10
value: 30.316
- type: ndcg_at_100
value: 36.146
- type: ndcg_at_1000
value: 37.972
- type: ndcg_at_3
value: 22.422
- type: ndcg_at_5
value: 25.727
- type: precision_at_1
value: 14.011000000000001
- type: precision_at_10
value: 5.0569999999999995
- type: precision_at_100
value: 0.7799999999999999
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 9.483
- type: precision_at_5
value: 7.312
- type: recall_at_1
value: 14.011000000000001
- type: recall_at_10
value: 50.568999999999996
- type: recall_at_100
value: 77.952
- type: recall_at_1000
value: 92.674
- type: recall_at_3
value: 28.449999999999996
- type: recall_at_5
value: 36.558
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 21.580787107217457
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 12.755947651867459
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 50.36895415359604
- type: mrr
value: 62.93244075100032
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 54.84190098866484
- type: cos_sim_spearman
value: 52.065644182348144
- type: euclidean_pearson
value: 54.181073661388034
- type: euclidean_spearman
value: 52.065644182348144
- type: manhattan_pearson
value: 54.98368207013862
- type: manhattan_spearman
value: 53.66387337016872
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 63.48051948051948
- type: f1
value: 61.45740352513437
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 16.23123129183937
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 6.846095550717324
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.587
- type: map_at_10
value: 20.032
- type: map_at_100
value: 21.2
- type: map_at_1000
value: 21.351
- type: map_at_3
value: 18.224
- type: map_at_5
value: 19.028
- type: mrr_at_1
value: 18.312
- type: mrr_at_10
value: 24.343999999999998
- type: mrr_at_100
value: 25.302000000000003
- type: mrr_at_1000
value: 25.385
- type: mrr_at_3
value: 22.461000000000002
- type: mrr_at_5
value: 23.219
- type: ndcg_at_1
value: 18.312
- type: ndcg_at_10
value: 24.05
- type: ndcg_at_100
value: 29.512
- type: ndcg_at_1000
value: 33.028999999999996
- type: ndcg_at_3
value: 20.947
- type: ndcg_at_5
value: 21.807000000000002
- type: precision_at_1
value: 18.312
- type: precision_at_10
value: 4.664
- type: precision_at_100
value: 0.9570000000000001
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 10.11
- type: precision_at_5
value: 7.066999999999999
- type: recall_at_1
value: 14.587
- type: recall_at_10
value: 31.865
- type: recall_at_100
value: 55.922000000000004
- type: recall_at_1000
value: 80.878
- type: recall_at_3
value: 22.229
- type: recall_at_5
value: 25.09
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.456
- type: map_at_10
value: 11.429
- type: map_at_100
value: 11.956
- type: map_at_1000
value: 12.04
- type: map_at_3
value: 10.309
- type: map_at_5
value: 11.006
- type: mrr_at_1
value: 10.637
- type: mrr_at_10
value: 14.047
- type: mrr_at_100
value: 14.591999999999999
- type: mrr_at_1000
value: 14.66
- type: mrr_at_3
value: 12.876999999999999
- type: mrr_at_5
value: 13.644
- type: ndcg_at_1
value: 10.637
- type: ndcg_at_10
value: 13.623
- type: ndcg_at_100
value: 16.337
- type: ndcg_at_1000
value: 18.881
- type: ndcg_at_3
value: 11.76
- type: ndcg_at_5
value: 12.803
- type: precision_at_1
value: 10.637
- type: precision_at_10
value: 2.611
- type: precision_at_100
value: 0.49899999999999994
- type: precision_at_1000
value: 0.08800000000000001
- type: precision_at_3
value: 5.7540000000000004
- type: precision_at_5
value: 4.306
- type: recall_at_1
value: 8.456
- type: recall_at_10
value: 17.543
- type: recall_at_100
value: 29.696
- type: recall_at_1000
value: 48.433
- type: recall_at_3
value: 12.299
- type: recall_at_5
value: 15.126000000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.517999999999999
- type: map_at_10
value: 14.924999999999999
- type: map_at_100
value: 15.716
- type: map_at_1000
value: 15.804000000000002
- type: map_at_3
value: 13.228000000000002
- type: map_at_5
value: 14.155999999999999
- type: mrr_at_1
value: 12.790000000000001
- type: mrr_at_10
value: 17.122999999999998
- type: mrr_at_100
value: 17.874000000000002
- type: mrr_at_1000
value: 17.947
- type: mrr_at_3
value: 15.528
- type: mrr_at_5
value: 16.421
- type: ndcg_at_1
value: 12.790000000000001
- type: ndcg_at_10
value: 17.967
- type: ndcg_at_100
value: 22.016
- type: ndcg_at_1000
value: 24.57
- type: ndcg_at_3
value: 14.745
- type: ndcg_at_5
value: 16.247
- type: precision_at_1
value: 12.790000000000001
- type: precision_at_10
value: 3.229
- type: precision_at_100
value: 0.592
- type: precision_at_1000
value: 0.087
- type: precision_at_3
value: 6.792
- type: precision_at_5
value: 5.066
- type: recall_at_1
value: 10.517999999999999
- type: recall_at_10
value: 25.194
- type: recall_at_100
value: 43.858999999999995
- type: recall_at_1000
value: 63.410999999999994
- type: recall_at_3
value: 16.384999999999998
- type: recall_at_5
value: 20.09
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.325000000000001
- type: map_at_10
value: 12.262
- type: map_at_100
value: 13.003
- type: map_at_1000
value: 13.126999999999999
- type: map_at_3
value: 10.946
- type: map_at_5
value: 11.581
- type: mrr_at_1
value: 9.379
- type: mrr_at_10
value: 13.527000000000001
- type: mrr_at_100
value: 14.249999999999998
- type: mrr_at_1000
value: 14.365
- type: mrr_at_3
value: 12.166
- type: mrr_at_5
value: 12.798000000000002
- type: ndcg_at_1
value: 9.379
- type: ndcg_at_10
value: 14.878
- type: ndcg_at_100
value: 19.17
- type: ndcg_at_1000
value: 22.861
- type: ndcg_at_3
value: 12.136
- type: ndcg_at_5
value: 13.209000000000001
- type: precision_at_1
value: 9.379
- type: precision_at_10
value: 2.5309999999999997
- type: precision_at_100
value: 0.505
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 5.386
- type: precision_at_5
value: 3.887
- type: recall_at_1
value: 8.325000000000001
- type: recall_at_10
value: 21.886
- type: recall_at_100
value: 42.977
- type: recall_at_1000
value: 71.946
- type: recall_at_3
value: 14.123
- type: recall_at_5
value: 16.747
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.982
- type: map_at_10
value: 9.249
- type: map_at_100
value: 10.0
- type: map_at_1000
value: 10.127
- type: map_at_3
value: 7.913
- type: map_at_5
value: 8.540000000000001
- type: mrr_at_1
value: 7.960000000000001
- type: mrr_at_10
value: 11.703
- type: mrr_at_100
value: 12.43
- type: mrr_at_1000
value: 12.534999999999998
- type: mrr_at_3
value: 10.344000000000001
- type: mrr_at_5
value: 11.022
- type: ndcg_at_1
value: 7.960000000000001
- type: ndcg_at_10
value: 11.863
- type: ndcg_at_100
value: 16.086
- type: ndcg_at_1000
value: 19.738
- type: ndcg_at_3
value: 9.241000000000001
- type: ndcg_at_5
value: 10.228
- type: precision_at_1
value: 7.960000000000001
- type: precision_at_10
value: 2.4
- type: precision_at_100
value: 0.534
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 4.561
- type: precision_at_5
value: 3.408
- type: recall_at_1
value: 5.982
- type: recall_at_10
value: 17.669999999999998
- type: recall_at_100
value: 37.261
- type: recall_at_1000
value: 64.416
- type: recall_at_3
value: 10.376000000000001
- type: recall_at_5
value: 12.933
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.068
- type: map_at_10
value: 12.101
- type: map_at_100
value: 12.828000000000001
- type: map_at_1000
value: 12.953000000000001
- type: map_at_3
value: 11.047
- type: map_at_5
value: 11.542
- type: mrr_at_1
value: 10.972
- type: mrr_at_10
value: 14.873
- type: mrr_at_100
value: 15.584000000000001
- type: mrr_at_1000
value: 15.681999999999999
- type: mrr_at_3
value: 13.523
- type: mrr_at_5
value: 14.254
- type: ndcg_at_1
value: 10.972
- type: ndcg_at_10
value: 14.557999999999998
- type: ndcg_at_100
value: 18.56
- type: ndcg_at_1000
value: 21.975
- type: ndcg_at_3
value: 12.436
- type: ndcg_at_5
value: 13.270999999999999
- type: precision_at_1
value: 10.972
- type: precision_at_10
value: 2.714
- type: precision_at_100
value: 0.5720000000000001
- type: precision_at_1000
value: 0.10200000000000001
- type: precision_at_3
value: 5.711
- type: precision_at_5
value: 4.1579999999999995
- type: recall_at_1
value: 9.068
- type: recall_at_10
value: 19.381999999999998
- type: recall_at_100
value: 37.602999999999994
- type: recall_at_1000
value: 62.376
- type: recall_at_3
value: 13.48
- type: recall_at_5
value: 15.506
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.206
- type: map_at_10
value: 12.032
- type: map_at_100
value: 12.992
- type: map_at_1000
value: 13.135
- type: map_at_3
value: 10.741
- type: map_at_5
value: 11.392
- type: mrr_at_1
value: 10.502
- type: mrr_at_10
value: 14.818999999999999
- type: mrr_at_100
value: 15.716
- type: mrr_at_1000
value: 15.823
- type: mrr_at_3
value: 13.375
- type: mrr_at_5
value: 14.169
- type: ndcg_at_1
value: 10.502
- type: ndcg_at_10
value: 14.790000000000001
- type: ndcg_at_100
value: 19.881999999999998
- type: ndcg_at_1000
value: 23.703
- type: ndcg_at_3
value: 12.281
- type: ndcg_at_5
value: 13.33
- type: precision_at_1
value: 10.502
- type: precision_at_10
value: 2.911
- type: precision_at_100
value: 0.668
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 6.012
- type: precision_at_5
value: 4.475
- type: recall_at_1
value: 8.206
- type: recall_at_10
value: 20.508000000000003
- type: recall_at_100
value: 43.568
- type: recall_at_1000
value: 71.56400000000001
- type: recall_at_3
value: 13.607
- type: recall_at_5
value: 16.211000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.4159999999999995
- type: map_at_10
value: 9.581000000000001
- type: map_at_100
value: 10.123999999999999
- type: map_at_1000
value: 10.226
- type: map_at_3
value: 8.51
- type: map_at_5
value: 9.078999999999999
- type: mrr_at_1
value: 7.515
- type: mrr_at_10
value: 10.801
- type: mrr_at_100
value: 11.373
- type: mrr_at_1000
value: 11.466999999999999
- type: mrr_at_3
value: 9.637
- type: mrr_at_5
value: 10.197000000000001
- type: ndcg_at_1
value: 7.515
- type: ndcg_at_10
value: 11.776
- type: ndcg_at_100
value: 14.776
- type: ndcg_at_1000
value: 17.7
- type: ndcg_at_3
value: 9.515
- type: ndcg_at_5
value: 10.511
- type: precision_at_1
value: 7.515
- type: precision_at_10
value: 2.086
- type: precision_at_100
value: 0.402
- type: precision_at_1000
value: 0.07100000000000001
- type: precision_at_3
value: 4.397
- type: precision_at_5
value: 3.19
- type: recall_at_1
value: 6.4159999999999995
- type: recall_at_10
value: 17.468
- type: recall_at_100
value: 31.398
- type: recall_at_1000
value: 53.686
- type: recall_at_3
value: 11.379999999999999
- type: recall_at_5
value: 13.745
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.646
- type: map_at_10
value: 7.047000000000001
- type: map_at_100
value: 7.697
- type: map_at_1000
value: 7.806
- type: map_at_3
value: 6.258
- type: map_at_5
value: 6.628
- type: mrr_at_1
value: 5.919
- type: mrr_at_10
value: 8.767999999999999
- type: mrr_at_100
value: 9.434
- type: mrr_at_1000
value: 9.524000000000001
- type: mrr_at_3
value: 7.8
- type: mrr_at_5
value: 8.275
- type: ndcg_at_1
value: 5.919
- type: ndcg_at_10
value: 8.927999999999999
- type: ndcg_at_100
value: 12.467
- type: ndcg_at_1000
value: 15.674
- type: ndcg_at_3
value: 7.3260000000000005
- type: ndcg_at_5
value: 7.931000000000001
- type: precision_at_1
value: 5.919
- type: precision_at_10
value: 1.7760000000000002
- type: precision_at_100
value: 0.438
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 3.6249999999999996
- type: precision_at_5
value: 2.657
- type: recall_at_1
value: 4.646
- type: recall_at_10
value: 12.973
- type: recall_at_100
value: 29.444
- type: recall_at_1000
value: 53.413999999999994
- type: recall_at_3
value: 8.378
- type: recall_at_5
value: 9.957
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.202
- type: map_at_10
value: 13.402
- type: map_at_100
value: 14.330000000000002
- type: map_at_1000
value: 14.455000000000002
- type: map_at_3
value: 11.916
- type: map_at_5
value: 12.828000000000001
- type: mrr_at_1
value: 10.634
- type: mrr_at_10
value: 15.528
- type: mrr_at_100
value: 16.393
- type: mrr_at_1000
value: 16.497999999999998
- type: mrr_at_3
value: 13.837
- type: mrr_at_5
value: 14.821000000000002
- type: ndcg_at_1
value: 10.634
- type: ndcg_at_10
value: 16.267
- type: ndcg_at_100
value: 21.149
- type: ndcg_at_1000
value: 24.509
- type: ndcg_at_3
value: 13.320000000000002
- type: ndcg_at_5
value: 14.857000000000001
- type: precision_at_1
value: 10.634
- type: precision_at_10
value: 2.948
- type: precision_at_100
value: 0.618
- type: precision_at_1000
value: 0.10200000000000001
- type: precision_at_3
value: 6.188
- type: precision_at_5
value: 4.7010000000000005
- type: recall_at_1
value: 9.202
- type: recall_at_10
value: 22.921
- type: recall_at_100
value: 45.292
- type: recall_at_1000
value: 69.853
- type: recall_at_3
value: 15.126000000000001
- type: recall_at_5
value: 18.863
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.278
- type: map_at_10
value: 15.72
- type: map_at_100
value: 16.832
- type: map_at_1000
value: 17.025000000000002
- type: map_at_3
value: 13.852999999999998
- type: map_at_5
value: 14.654
- type: mrr_at_1
value: 14.822
- type: mrr_at_10
value: 19.564
- type: mrr_at_100
value: 20.509
- type: mrr_at_1000
value: 20.607
- type: mrr_at_3
value: 17.721
- type: mrr_at_5
value: 18.451999999999998
- type: ndcg_at_1
value: 14.822
- type: ndcg_at_10
value: 19.548
- type: ndcg_at_100
value: 24.734
- type: ndcg_at_1000
value: 28.832
- type: ndcg_at_3
value: 16.14
- type: ndcg_at_5
value: 17.253
- type: precision_at_1
value: 14.822
- type: precision_at_10
value: 3.972
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 7.642
- type: precision_at_5
value: 5.6129999999999995
- type: recall_at_1
value: 11.278
- type: recall_at_10
value: 27.006999999999998
- type: recall_at_100
value: 51.012
- type: recall_at_1000
value: 79.833
- type: recall_at_3
value: 16.785
- type: recall_at_5
value: 19.82
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.305
- type: map_at_10
value: 9.099
- type: map_at_100
value: 9.927999999999999
- type: map_at_1000
value: 10.027
- type: map_at_3
value: 7.7700000000000005
- type: map_at_5
value: 8.333
- type: mrr_at_1
value: 6.1
- type: mrr_at_10
value: 10.227
- type: mrr_at_100
value: 11.057
- type: mrr_at_1000
value: 11.151
- type: mrr_at_3
value: 8.842
- type: mrr_at_5
value: 9.442
- type: ndcg_at_1
value: 6.1
- type: ndcg_at_10
value: 11.769
- type: ndcg_at_100
value: 16.378999999999998
- type: ndcg_at_1000
value: 19.517
- type: ndcg_at_3
value: 8.936
- type: ndcg_at_5
value: 9.907
- type: precision_at_1
value: 6.1
- type: precision_at_10
value: 2.181
- type: precision_at_100
value: 0.481
- type: precision_at_1000
value: 0.08099999999999999
- type: precision_at_3
value: 4.19
- type: precision_at_5
value: 3.031
- type: recall_at_1
value: 5.305
- type: recall_at_10
value: 19.236
- type: recall_at_100
value: 41.333999999999996
- type: recall_at_1000
value: 65.96600000000001
- type: recall_at_3
value: 11.189
- type: recall_at_5
value: 13.592
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.882
- type: map_at_10
value: 1.6
- type: map_at_100
value: 1.894
- type: map_at_1000
value: 1.9640000000000002
- type: map_at_3
value: 1.345
- type: map_at_5
value: 1.444
- type: mrr_at_1
value: 2.2800000000000002
- type: mrr_at_10
value: 3.8510000000000004
- type: mrr_at_100
value: 4.401
- type: mrr_at_1000
value: 4.472
- type: mrr_at_3
value: 3.2359999999999998
- type: mrr_at_5
value: 3.519
- type: ndcg_at_1
value: 2.2800000000000002
- type: ndcg_at_10
value: 2.5829999999999997
- type: ndcg_at_100
value: 4.629
- type: ndcg_at_1000
value: 6.709
- type: ndcg_at_3
value: 1.978
- type: ndcg_at_5
value: 2.133
- type: precision_at_1
value: 2.2800000000000002
- type: precision_at_10
value: 0.86
- type: precision_at_100
value: 0.298
- type: precision_at_1000
value: 0.065
- type: precision_at_3
value: 1.52
- type: precision_at_5
value: 1.173
- type: recall_at_1
value: 0.882
- type: recall_at_10
value: 3.273
- type: recall_at_100
value: 11.254
- type: recall_at_1000
value: 23.988
- type: recall_at_3
value: 1.818
- type: recall_at_5
value: 2.236
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.057
- type: map_at_10
value: 2.289
- type: map_at_100
value: 2.844
- type: map_at_1000
value: 3.026
- type: map_at_3
value: 1.661
- type: map_at_5
value: 1.931
- type: mrr_at_1
value: 12.75
- type: mrr_at_10
value: 17.645
- type: mrr_at_100
value: 18.312
- type: mrr_at_1000
value: 18.385
- type: mrr_at_3
value: 15.958
- type: mrr_at_5
value: 17.046
- type: ndcg_at_1
value: 10.0
- type: ndcg_at_10
value: 6.890000000000001
- type: ndcg_at_100
value: 7.131
- type: ndcg_at_1000
value: 9.725
- type: ndcg_at_3
value: 8.222
- type: ndcg_at_5
value: 7.536
- type: precision_at_1
value: 12.75
- type: precision_at_10
value: 5.925
- type: precision_at_100
value: 1.6469999999999998
- type: precision_at_1000
value: 0.40299999999999997
- type: precision_at_3
value: 9.667
- type: precision_at_5
value: 8.0
- type: recall_at_1
value: 1.057
- type: recall_at_10
value: 3.8580000000000005
- type: recall_at_100
value: 8.685
- type: recall_at_1000
value: 17.605
- type: recall_at_3
value: 2.041
- type: recall_at_5
value: 2.811
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 20.674999999999997
- type: f1
value: 17.79184478487413
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.637
- type: map_at_10
value: 3.9730000000000003
- type: map_at_100
value: 4.228
- type: map_at_1000
value: 4.268000000000001
- type: map_at_3
value: 3.542
- type: map_at_5
value: 3.763
- type: mrr_at_1
value: 2.7449999999999997
- type: mrr_at_10
value: 4.146
- type: mrr_at_100
value: 4.42
- type: mrr_at_1000
value: 4.460999999999999
- type: mrr_at_3
value: 3.695
- type: mrr_at_5
value: 3.925
- type: ndcg_at_1
value: 2.7449999999999997
- type: ndcg_at_10
value: 4.801
- type: ndcg_at_100
value: 6.198
- type: ndcg_at_1000
value: 7.468
- type: ndcg_at_3
value: 3.882
- type: ndcg_at_5
value: 4.283
- type: precision_at_1
value: 2.7449999999999997
- type: precision_at_10
value: 0.771
- type: precision_at_100
value: 0.152
- type: precision_at_1000
value: 0.027
- type: precision_at_3
value: 1.6549999999999998
- type: precision_at_5
value: 1.206
- type: recall_at_1
value: 2.637
- type: recall_at_10
value: 7.2669999999999995
- type: recall_at_100
value: 13.982
- type: recall_at_1000
value: 24.192
- type: recall_at_3
value: 4.712000000000001
- type: recall_at_5
value: 5.6739999999999995
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.91
- type: map_at_10
value: 5.721
- type: map_at_100
value: 6.489000000000001
- type: map_at_1000
value: 6.642
- type: map_at_3
value: 4.797
- type: map_at_5
value: 5.292
- type: mrr_at_1
value: 6.481000000000001
- type: mrr_at_10
value: 10.624
- type: mrr_at_100
value: 11.498999999999999
- type: mrr_at_1000
value: 11.599
- type: mrr_at_3
value: 9.285
- type: mrr_at_5
value: 10.003
- type: ndcg_at_1
value: 6.481000000000001
- type: ndcg_at_10
value: 8.303
- type: ndcg_at_100
value: 12.512
- type: ndcg_at_1000
value: 16.665
- type: ndcg_at_3
value: 6.827
- type: ndcg_at_5
value: 7.367
- type: precision_at_1
value: 6.481000000000001
- type: precision_at_10
value: 2.485
- type: precision_at_100
value: 0.668
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 4.733
- type: precision_at_5
value: 3.642
- type: recall_at_1
value: 2.91
- type: recall_at_10
value: 11.239
- type: recall_at_100
value: 27.877999999999997
- type: recall_at_1000
value: 54.507000000000005
- type: recall_at_3
value: 6.683
- type: recall_at_5
value: 8.591
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.073
- type: map_at_10
value: 2.919
- type: map_at_100
value: 3.107
- type: map_at_1000
value: 3.143
- type: map_at_3
value: 2.6100000000000003
- type: map_at_5
value: 2.773
- type: mrr_at_1
value: 4.146
- type: mrr_at_10
value: 5.657
- type: mrr_at_100
value: 5.970000000000001
- type: mrr_at_1000
value: 6.022
- type: mrr_at_3
value: 5.116
- type: mrr_at_5
value: 5.411
- type: ndcg_at_1
value: 4.146
- type: ndcg_at_10
value: 4.115
- type: ndcg_at_100
value: 5.319
- type: ndcg_at_1000
value: 6.584
- type: ndcg_at_3
value: 3.3709999999999996
- type: ndcg_at_5
value: 3.7159999999999997
- type: precision_at_1
value: 4.146
- type: precision_at_10
value: 0.983
- type: precision_at_100
value: 0.197
- type: precision_at_1000
value: 0.037
- type: precision_at_3
value: 2.152
- type: precision_at_5
value: 1.564
- type: recall_at_1
value: 2.073
- type: recall_at_10
value: 4.916
- type: recall_at_100
value: 9.844999999999999
- type: recall_at_1000
value: 18.454
- type: recall_at_3
value: 3.228
- type: recall_at_5
value: 3.91
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 53.28480000000001
- type: ap
value: 51.81084207241404
- type: f1
value: 52.83683146513476
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 2.613
- type: map_at_10
value: 4.33
- type: map_at_100
value: 4.681
- type: map_at_1000
value: 4.731
- type: map_at_3
value: 3.7560000000000002
- type: map_at_5
value: 4.035
- type: mrr_at_1
value: 2.665
- type: mrr_at_10
value: 4.436
- type: mrr_at_100
value: 4.797
- type: mrr_at_1000
value: 4.848
- type: mrr_at_3
value: 3.83
- type: mrr_at_5
value: 4.123
- type: ndcg_at_1
value: 2.665
- type: ndcg_at_10
value: 5.399
- type: ndcg_at_100
value: 7.402
- type: ndcg_at_1000
value: 9.08
- type: ndcg_at_3
value: 4.1579999999999995
- type: ndcg_at_5
value: 4.664
- type: precision_at_1
value: 2.665
- type: precision_at_10
value: 0.907
- type: precision_at_100
value: 0.19499999999999998
- type: precision_at_1000
value: 0.034
- type: precision_at_3
value: 1.791
- type: precision_at_5
value: 1.3299999999999998
- type: recall_at_1
value: 2.613
- type: recall_at_10
value: 8.729000000000001
- type: recall_at_100
value: 18.668000000000003
- type: recall_at_1000
value: 32.387
- type: recall_at_3
value: 5.25
- type: recall_at_5
value: 6.465
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 73.57729138166896
- type: f1
value: 71.0267308110663
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 38.76652986776106
- type: f1
value: 24.385724192837007
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 43.43308675184936
- type: f1
value: 39.072401899805016
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 55.225285810356425
- type: f1
value: 49.81719052485716
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 20.583405653329283
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 17.155646378261917
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 24.26316550665883
- type: mrr
value: 23.951621402458755
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.4040000000000001
- type: map_at_10
value: 2.199
- type: map_at_100
value: 2.597
- type: map_at_1000
value: 3.15
- type: map_at_3
value: 1.7850000000000001
- type: map_at_5
value: 2.005
- type: mrr_at_1
value: 13.932
- type: mrr_at_10
value: 19.529
- type: mrr_at_100
value: 20.53
- type: mrr_at_1000
value: 20.635
- type: mrr_at_3
value: 17.647
- type: mrr_at_5
value: 18.731
- type: ndcg_at_1
value: 12.539
- type: ndcg_at_10
value: 8.676
- type: ndcg_at_100
value: 8.092
- type: ndcg_at_1000
value: 16.375999999999998
- type: ndcg_at_3
value: 10.615
- type: ndcg_at_5
value: 9.690999999999999
- type: precision_at_1
value: 13.622
- type: precision_at_10
value: 6.315999999999999
- type: precision_at_100
value: 2.486
- type: precision_at_1000
value: 1.317
- type: precision_at_3
value: 10.113999999999999
- type: precision_at_5
value: 8.235000000000001
- type: recall_at_1
value: 1.4040000000000001
- type: recall_at_10
value: 3.794
- type: recall_at_100
value: 9.71
- type: recall_at_1000
value: 37.476
- type: recall_at_3
value: 2.197
- type: recall_at_5
value: 2.929
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.299
- type: map_at_10
value: 2.7279999999999998
- type: map_at_100
value: 3.065
- type: map_at_1000
value: 3.118
- type: map_at_3
value: 2.182
- type: map_at_5
value: 2.48
- type: mrr_at_1
value: 1.6219999999999999
- type: mrr_at_10
value: 3.237
- type: mrr_at_100
value: 3.5749999999999997
- type: mrr_at_1000
value: 3.626
- type: mrr_at_3
value: 2.6550000000000002
- type: mrr_at_5
value: 2.9770000000000003
- type: ndcg_at_1
value: 1.6219999999999999
- type: ndcg_at_10
value: 3.768
- type: ndcg_at_100
value: 5.721
- type: ndcg_at_1000
value: 7.346
- type: ndcg_at_3
value: 2.604
- type: ndcg_at_5
value: 3.1530000000000005
- type: precision_at_1
value: 1.6219999999999999
- type: precision_at_10
value: 0.776
- type: precision_at_100
value: 0.194
- type: precision_at_1000
value: 0.034999999999999996
- type: precision_at_3
value: 1.371
- type: precision_at_5
value: 1.1119999999999999
- type: recall_at_1
value: 1.299
- type: recall_at_10
value: 6.54
- type: recall_at_100
value: 16.014999999999997
- type: recall_at_1000
value: 28.776000000000003
- type: recall_at_3
value: 3.37
- type: recall_at_5
value: 4.676
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 50.827
- type: map_at_10
value: 60.903
- type: map_at_100
value: 61.67700000000001
- type: map_at_1000
value: 61.729
- type: map_at_3
value: 58.411
- type: map_at_5
value: 59.854
- type: mrr_at_1
value: 58.52
- type: mrr_at_10
value: 65.53999999999999
- type: mrr_at_100
value: 65.94
- type: mrr_at_1000
value: 65.962
- type: mrr_at_3
value: 63.905
- type: mrr_at_5
value: 64.883
- type: ndcg_at_1
value: 58.51
- type: ndcg_at_10
value: 65.458
- type: ndcg_at_100
value: 68.245
- type: ndcg_at_1000
value: 69.244
- type: ndcg_at_3
value: 61.970000000000006
- type: ndcg_at_5
value: 63.664
- type: precision_at_1
value: 58.51
- type: precision_at_10
value: 9.873999999999999
- type: precision_at_100
value: 1.24
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 26.650000000000002
- type: precision_at_5
value: 17.666
- type: recall_at_1
value: 50.827
- type: recall_at_10
value: 74.13300000000001
- type: recall_at_100
value: 85.724
- type: recall_at_1000
value: 92.551
- type: recall_at_3
value: 64.122
- type: recall_at_5
value: 68.757
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 15.106948858308094
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 30.968103547012337
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.4749999999999999
- type: map_at_10
value: 3.434
- type: map_at_100
value: 4.139
- type: map_at_1000
value: 4.312
- type: map_at_3
value: 2.554
- type: map_at_5
value: 2.999
- type: mrr_at_1
value: 7.3
- type: mrr_at_10
value: 12.031
- type: mrr_at_100
value: 12.97
- type: mrr_at_1000
value: 13.092
- type: mrr_at_3
value: 10.217
- type: mrr_at_5
value: 11.172
- type: ndcg_at_1
value: 7.3
- type: ndcg_at_10
value: 6.406000000000001
- type: ndcg_at_100
value: 10.302999999999999
- type: ndcg_at_1000
value: 14.791000000000002
- type: ndcg_at_3
value: 5.982
- type: ndcg_at_5
value: 5.274
- type: precision_at_1
value: 7.3
- type: precision_at_10
value: 3.37
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.201
- type: precision_at_3
value: 5.567
- type: precision_at_5
value: 4.68
- type: recall_at_1
value: 1.4749999999999999
- type: recall_at_10
value: 6.79
- type: recall_at_100
value: 18.55
- type: recall_at_1000
value: 40.842
- type: recall_at_3
value: 3.36
- type: recall_at_5
value: 4.72
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 59.464420082440526
- type: cos_sim_spearman
value: 54.319988337451704
- type: euclidean_pearson
value: 57.042312873314295
- type: euclidean_spearman
value: 54.31996388571784
- type: manhattan_pearson
value: 57.078786802338435
- type: manhattan_spearman
value: 54.323312153757456
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 60.08105871689929
- type: cos_sim_spearman
value: 57.53293836132526
- type: euclidean_pearson
value: 57.69984777047449
- type: euclidean_spearman
value: 57.534154476967345
- type: manhattan_pearson
value: 57.661519973840946
- type: manhattan_spearman
value: 57.447636234309854
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 57.12692049687197
- type: cos_sim_spearman
value: 57.4759438730368
- type: euclidean_pearson
value: 58.41782334532981
- type: euclidean_spearman
value: 57.47613008122331
- type: manhattan_pearson
value: 58.41335837274888
- type: manhattan_spearman
value: 57.465936751045746
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 53.84165004759765
- type: cos_sim_spearman
value: 52.32112048731462
- type: euclidean_pearson
value: 52.790405817119094
- type: euclidean_spearman
value: 52.32112268628659
- type: manhattan_pearson
value: 52.804939090733804
- type: manhattan_spearman
value: 52.31750678935915
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 63.555819199866036
- type: cos_sim_spearman
value: 64.05841117331784
- type: euclidean_pearson
value: 63.659991414541786
- type: euclidean_spearman
value: 64.05841071779129
- type: manhattan_pearson
value: 63.6915442281397
- type: manhattan_spearman
value: 64.07728265258595
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 63.03024268207247
- type: cos_sim_spearman
value: 63.53003651570799
- type: euclidean_pearson
value: 64.09620752390686
- type: euclidean_spearman
value: 63.530036058718096
- type: manhattan_pearson
value: 64.07468313413827
- type: manhattan_spearman
value: 63.526415746516285
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 70.18862439704168
- type: cos_sim_spearman
value: 70.97966882821095
- type: euclidean_pearson
value: 71.04858522892525
- type: euclidean_spearman
value: 70.97966882821095
- type: manhattan_pearson
value: 71.0777838495318
- type: manhattan_spearman
value: 71.08141859528023
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 49.680993011354964
- type: cos_sim_spearman
value: 55.990646519065734
- type: euclidean_pearson
value: 52.53309325175639
- type: euclidean_spearman
value: 55.990646519065734
- type: manhattan_pearson
value: 52.55809108662631
- type: manhattan_spearman
value: 55.65236114980215
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 61.18394695826386
- type: cos_sim_spearman
value: 60.77402126712771
- type: euclidean_pearson
value: 61.202070794992736
- type: euclidean_spearman
value: 60.77402126712771
- type: manhattan_pearson
value: 61.2505175850885
- type: manhattan_spearman
value: 60.77213463387346
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 58.251838750265804
- type: mrr
value: 81.27406090641384
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.833
- type: map_at_10
value: 11.219999999999999
- type: map_at_100
value: 12.086
- type: map_at_1000
value: 12.200999999999999
- type: map_at_3
value: 10.056
- type: map_at_5
value: 10.664
- type: mrr_at_1
value: 9.0
- type: mrr_at_10
value: 11.875
- type: mrr_at_100
value: 12.757
- type: mrr_at_1000
value: 12.864
- type: mrr_at_3
value: 10.722
- type: mrr_at_5
value: 11.322000000000001
- type: ndcg_at_1
value: 9.0
- type: ndcg_at_10
value: 13.001
- type: ndcg_at_100
value: 17.784
- type: ndcg_at_1000
value: 21.695
- type: ndcg_at_3
value: 10.63
- type: ndcg_at_5
value: 11.693000000000001
- type: precision_at_1
value: 9.0
- type: precision_at_10
value: 2.0
- type: precision_at_100
value: 0.46299999999999997
- type: precision_at_1000
value: 0.083
- type: precision_at_3
value: 4.222
- type: precision_at_5
value: 3.1329999999999996
- type: recall_at_1
value: 8.833
- type: recall_at_10
value: 18.0
- type: recall_at_100
value: 41.211
- type: recall_at_1000
value: 73.14399999999999
- type: recall_at_3
value: 11.5
- type: recall_at_5
value: 14.083000000000002
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.44455445544554
- type: cos_sim_ap
value: 68.76115592640271
- type: cos_sim_f1
value: 67.29805013927577
- type: cos_sim_precision
value: 75.9748427672956
- type: cos_sim_recall
value: 60.4
- type: dot_accuracy
value: 99.44455445544554
- type: dot_ap
value: 68.76115778951738
- type: dot_f1
value: 67.29805013927577
- type: dot_precision
value: 75.9748427672956
- type: dot_recall
value: 60.4
- type: euclidean_accuracy
value: 99.44455445544554
- type: euclidean_ap
value: 68.76115530286063
- type: euclidean_f1
value: 67.29805013927577
- type: euclidean_precision
value: 75.9748427672956
- type: euclidean_recall
value: 60.4
- type: manhattan_accuracy
value: 99.44653465346535
- type: manhattan_ap
value: 68.76446446842253
- type: manhattan_f1
value: 67.34926052332196
- type: manhattan_precision
value: 78.10026385224275
- type: manhattan_recall
value: 59.199999999999996
- type: max_accuracy
value: 99.44653465346535
- type: max_ap
value: 68.76446446842253
- type: max_f1
value: 67.34926052332196
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 28.486032726226675
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 29.654061810103283
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 39.81455140801657
- type: mrr
value: 40.09712407690349
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.05
- type: map_at_10
value: 0.191
- type: map_at_100
value: 0.346
- type: map_at_1000
value: 0.553
- type: map_at_3
value: 0.11299999999999999
- type: map_at_5
value: 0.148
- type: mrr_at_1
value: 22.0
- type: mrr_at_10
value: 30.091
- type: mrr_at_100
value: 31.241999999999997
- type: mrr_at_1000
value: 31.298
- type: mrr_at_3
value: 28.000000000000004
- type: mrr_at_5
value: 28.999999999999996
- type: ndcg_at_1
value: 18.0
- type: ndcg_at_10
value: 12.501000000000001
- type: ndcg_at_100
value: 5.605
- type: ndcg_at_1000
value: 4.543
- type: ndcg_at_3
value: 17.531
- type: ndcg_at_5
value: 15.254999999999999
- type: precision_at_1
value: 22.0
- type: precision_at_10
value: 12.6
- type: precision_at_100
value: 5.06
- type: precision_at_1000
value: 2.028
- type: precision_at_3
value: 20.666999999999998
- type: precision_at_5
value: 16.8
- type: recall_at_1
value: 0.05
- type: recall_at_10
value: 0.267
- type: recall_at_100
value: 1.102
- type: recall_at_1000
value: 4.205
- type: recall_at_3
value: 0.134
- type: recall_at_5
value: 0.182
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.45199999999999996
- type: map_at_10
value: 1.986
- type: map_at_100
value: 3.887
- type: map_at_1000
value: 4.5809999999999995
- type: map_at_3
value: 0.9299999999999999
- type: map_at_5
value: 1.287
- type: mrr_at_1
value: 8.163
- type: mrr_at_10
value: 16.152
- type: mrr_at_100
value: 17.187
- type: mrr_at_1000
value: 17.301
- type: mrr_at_3
value: 11.224
- type: mrr_at_5
value: 12.653
- type: ndcg_at_1
value: 4.082
- type: ndcg_at_10
value: 6.687
- type: ndcg_at_100
value: 13.158
- type: ndcg_at_1000
value: 22.259
- type: ndcg_at_3
value: 5.039
- type: ndcg_at_5
value: 5.519
- type: precision_at_1
value: 8.163
- type: precision_at_10
value: 8.163
- type: precision_at_100
value: 3.51
- type: precision_at_1000
value: 0.9159999999999999
- type: precision_at_3
value: 7.483
- type: precision_at_5
value: 7.3469999999999995
- type: recall_at_1
value: 0.45199999999999996
- type: recall_at_10
value: 5.27
- type: recall_at_100
value: 20.75
- type: recall_at_1000
value: 49.236999999999995
- type: recall_at_3
value: 1.28
- type: recall_at_5
value: 2.045
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 57.08740000000001
- type: ap
value: 9.092681400063896
- type: f1
value: 43.966684273361125
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 42.314657611771366
- type: f1
value: 42.2349043058169
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 15.71319288909283
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 78.84007867914407
- type: cos_sim_ap
value: 42.2183603452187
- type: cos_sim_f1
value: 43.1781412906705
- type: cos_sim_precision
value: 32.74263904034896
- type: cos_sim_recall
value: 63.377308707124016
- type: dot_accuracy
value: 78.84007867914407
- type: dot_ap
value: 42.21836359699547
- type: dot_f1
value: 43.1781412906705
- type: dot_precision
value: 32.74263904034896
- type: dot_recall
value: 63.377308707124016
- type: euclidean_accuracy
value: 78.84007867914407
- type: euclidean_ap
value: 42.218363575958854
- type: euclidean_f1
value: 43.1781412906705
- type: euclidean_precision
value: 32.74263904034896
- type: euclidean_recall
value: 63.377308707124016
- type: manhattan_accuracy
value: 78.79239434940692
- type: manhattan_ap
value: 42.178124350579
- type: manhattan_f1
value: 43.16231513602337
- type: manhattan_precision
value: 32.99832495812395
- type: manhattan_recall
value: 62.37467018469657
- type: max_accuracy
value: 78.84007867914407
- type: max_ap
value: 42.21836359699547
- type: max_f1
value: 43.1781412906705
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 82.51445647533667
- type: cos_sim_ap
value: 69.65701766911302
- type: cos_sim_f1
value: 62.92060699362217
- type: cos_sim_precision
value: 60.046173219532676
- type: cos_sim_recall
value: 66.08407761010163
- type: dot_accuracy
value: 82.51445647533667
- type: dot_ap
value: 69.6569952654014
- type: dot_f1
value: 62.92060699362217
- type: dot_precision
value: 60.046173219532676
- type: dot_recall
value: 66.08407761010163
- type: euclidean_accuracy
value: 82.51445647533667
- type: euclidean_ap
value: 69.65697749857492
- type: euclidean_f1
value: 62.92060699362217
- type: euclidean_precision
value: 60.046173219532676
- type: euclidean_recall
value: 66.08407761010163
- type: manhattan_accuracy
value: 82.52221834128925
- type: manhattan_ap
value: 69.65965534790995
- type: manhattan_f1
value: 62.865817064991006
- type: manhattan_precision
value: 58.04811265401917
- type: manhattan_recall
value: 68.55558977517708
- type: max_accuracy
value: 82.52221834128925
- type: max_ap
value: 69.65965534790995
- type: max_f1
value: 62.92060699362217
---
|
sam1120/dropoff-utcustom-train-SF-RGBD-b5_7 | sam1120 | 2024-02-12T13:58:42Z | 148 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T13:25:26Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGBD-b5_7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGBD-b5_7
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1296
- Mean Iou: 0.6242
- Mean Accuracy: 0.6623
- Overall Accuracy: 0.9652
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.3319
- Accuracy Undropoff: 0.9926
- Iou Unlabeled: nan
- Iou Dropoff: 0.2838
- Iou Undropoff: 0.9647
## Model description
More information needed
## Intended uses & limitations
More information needed
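In the absence of documented usage, the sketch below shows one plausible way to run inference with this checkpoint. It is an untested example: it assumes the standard `transformers` SegFormer classes and a plain RGB input (`example.png` is a placeholder path); since this model was trained on RGB-D style inputs, the exact preprocessing may differ.
```python
from PIL import Image
import torch
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Load the fine-tuned checkpoint and its image processor from this repository.
repo = "sam1120/dropoff-utcustom-train-SF-RGBD-b5_7"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)
model.eval()

image = Image.open("example.png").convert("RGB")  # placeholder input
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax (dropoff vs. undropoff).
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]
```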
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 0.9278 | 5.0 | 10 | 0.8454 | 0.3197 | 0.5545 | 0.8788 | nan | 0.2009 | 0.9082 | 0.0 | 0.0807 | 0.8785 |
| 0.5551 | 10.0 | 20 | 0.4668 | 0.3221 | 0.5042 | 0.9540 | nan | 0.0135 | 0.9948 | 0.0 | 0.0122 | 0.9540 |
| 0.3667 | 15.0 | 30 | 0.3354 | 0.3218 | 0.5035 | 0.9570 | nan | 0.0088 | 0.9982 | 0.0 | 0.0085 | 0.9570 |
| 0.2402 | 20.0 | 40 | 0.2678 | 0.5985 | 0.6492 | 0.9587 | nan | 0.3116 | 0.9868 | nan | 0.2388 | 0.9582 |
| 0.1562 | 25.0 | 50 | 0.2101 | 0.6240 | 0.6719 | 0.9631 | nan | 0.3544 | 0.9895 | nan | 0.2854 | 0.9625 |
| 0.1159 | 30.0 | 60 | 0.1704 | 0.6262 | 0.6641 | 0.9654 | nan | 0.3353 | 0.9928 | nan | 0.2875 | 0.9650 |
| 0.0869 | 35.0 | 70 | 0.1443 | 0.6380 | 0.6817 | 0.9657 | nan | 0.3720 | 0.9915 | nan | 0.3108 | 0.9652 |
| 0.079 | 40.0 | 80 | 0.1350 | 0.6072 | 0.6360 | 0.9654 | nan | 0.2766 | 0.9953 | nan | 0.2494 | 0.9650 |
| 0.0647 | 45.0 | 90 | 0.1370 | 0.5800 | 0.6031 | 0.9643 | nan | 0.2090 | 0.9971 | nan | 0.1959 | 0.9640 |
| 0.0587 | 50.0 | 100 | 0.1336 | 0.6276 | 0.6796 | 0.9628 | nan | 0.3707 | 0.9885 | nan | 0.2929 | 0.9622 |
| 0.0575 | 55.0 | 110 | 0.1313 | 0.6189 | 0.6531 | 0.9654 | nan | 0.3126 | 0.9937 | nan | 0.2729 | 0.9649 |
| 0.0527 | 60.0 | 120 | 0.1298 | 0.6252 | 0.6655 | 0.9648 | nan | 0.3391 | 0.9920 | nan | 0.2860 | 0.9643 |
| 0.0491 | 65.0 | 130 | 0.1313 | 0.6110 | 0.6492 | 0.9635 | nan | 0.3063 | 0.9920 | nan | 0.2589 | 0.9631 |
| 0.0441 | 70.0 | 140 | 0.1295 | 0.6103 | 0.6429 | 0.9648 | nan | 0.2919 | 0.9939 | nan | 0.2562 | 0.9643 |
| 0.0426 | 75.0 | 150 | 0.1233 | 0.6271 | 0.6633 | 0.9659 | nan | 0.3333 | 0.9933 | nan | 0.2887 | 0.9654 |
| 0.0477 | 80.0 | 160 | 0.1286 | 0.6255 | 0.6629 | 0.9655 | nan | 0.3328 | 0.9929 | nan | 0.2861 | 0.9650 |
| 0.039 | 85.0 | 170 | 0.1265 | 0.6380 | 0.6824 | 0.9656 | nan | 0.3735 | 0.9913 | nan | 0.3109 | 0.9650 |
| 0.0378 | 90.0 | 180 | 0.1309 | 0.6185 | 0.6543 | 0.9650 | nan | 0.3154 | 0.9932 | nan | 0.2725 | 0.9645 |
| 0.0362 | 95.0 | 190 | 0.1266 | 0.6311 | 0.6715 | 0.9655 | nan | 0.3508 | 0.9922 | nan | 0.2973 | 0.9650 |
| 0.0394 | 100.0 | 200 | 0.1307 | 0.6274 | 0.6635 | 0.9659 | nan | 0.3337 | 0.9934 | nan | 0.2894 | 0.9655 |
| 0.0362 | 105.0 | 210 | 0.1271 | 0.6366 | 0.6789 | 0.9658 | nan | 0.3661 | 0.9918 | nan | 0.3080 | 0.9653 |
| 0.0361 | 110.0 | 220 | 0.1274 | 0.6317 | 0.6736 | 0.9653 | nan | 0.3554 | 0.9918 | nan | 0.2987 | 0.9648 |
| 0.0353 | 115.0 | 230 | 0.1290 | 0.6216 | 0.6579 | 0.9652 | nan | 0.3228 | 0.9931 | nan | 0.2784 | 0.9647 |
| 0.0344 | 120.0 | 240 | 0.1296 | 0.6242 | 0.6623 | 0.9652 | nan | 0.3319 | 0.9926 | nan | 0.2838 | 0.9647 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sam1120/dropoff-utcustom-train-SF-RGBD-b5_6 | sam1120 | 2024-02-12T13:58:06Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T13:25:25Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGBD-b5_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGBD-b5_6
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1429
- Mean Iou: 0.6443
- Mean Accuracy: 0.6853
- Overall Accuracy: 0.9669
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.3782
- Accuracy Undropoff: 0.9925
- Iou Unlabeled: nan
- Iou Dropoff: 0.3223
- Iou Undropoff: 0.9664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.159 | 5.0 | 10 | 1.0040 | 0.2283 | 0.5676 | 0.6267 | nan | 0.5031 | 0.6321 | 0.0 | 0.0644 | 0.6203 |
| 0.8345 | 10.0 | 20 | 0.7480 | 0.3236 | 0.5320 | 0.9158 | nan | 0.1134 | 0.9506 | 0.0 | 0.0555 | 0.9154 |
| 0.5406 | 15.0 | 30 | 0.5477 | 0.3223 | 0.5049 | 0.9513 | nan | 0.0179 | 0.9918 | 0.0 | 0.0157 | 0.9513 |
| 0.3695 | 20.0 | 40 | 0.4590 | 0.3215 | 0.5036 | 0.9519 | nan | 0.0146 | 0.9926 | 0.0 | 0.0125 | 0.9519 |
| 0.3053 | 25.0 | 50 | 0.3790 | 0.3196 | 0.5001 | 0.9565 | nan | 0.0023 | 0.9979 | 0.0 | 0.0022 | 0.9565 |
| 0.2436 | 30.0 | 60 | 0.3303 | 0.4812 | 0.5020 | 0.9568 | nan | 0.0059 | 0.9981 | nan | 0.0056 | 0.9568 |
| 0.2148 | 35.0 | 70 | 0.2739 | 0.4794 | 0.5002 | 0.9580 | nan | 0.0008 | 0.9996 | nan | 0.0008 | 0.9580 |
| 0.1983 | 40.0 | 80 | 0.2348 | 0.5079 | 0.5284 | 0.9595 | nan | 0.0582 | 0.9986 | nan | 0.0564 | 0.9594 |
| 0.1784 | 45.0 | 90 | 0.2178 | 0.6064 | 0.6440 | 0.9631 | nan | 0.2960 | 0.9920 | nan | 0.2501 | 0.9626 |
| 0.1631 | 50.0 | 100 | 0.1943 | 0.6223 | 0.6811 | 0.9607 | nan | 0.3760 | 0.9861 | nan | 0.2846 | 0.9601 |
| 0.1468 | 55.0 | 110 | 0.1759 | 0.6206 | 0.6731 | 0.9617 | nan | 0.3583 | 0.9879 | nan | 0.2801 | 0.9611 |
| 0.1353 | 60.0 | 120 | 0.1657 | 0.6014 | 0.6335 | 0.9639 | nan | 0.2731 | 0.9939 | nan | 0.2393 | 0.9635 |
| 0.1474 | 65.0 | 130 | 0.1590 | 0.5943 | 0.6228 | 0.9641 | nan | 0.2505 | 0.9951 | nan | 0.2249 | 0.9637 |
| 0.1172 | 70.0 | 140 | 0.1562 | 0.6272 | 0.6662 | 0.9653 | nan | 0.3400 | 0.9924 | nan | 0.2896 | 0.9648 |
| 0.1169 | 75.0 | 150 | 0.1538 | 0.6302 | 0.6696 | 0.9656 | nan | 0.3467 | 0.9925 | nan | 0.2954 | 0.9651 |
| 0.1263 | 80.0 | 160 | 0.1540 | 0.6372 | 0.6784 | 0.9661 | nan | 0.3645 | 0.9922 | nan | 0.3089 | 0.9656 |
| 0.1028 | 85.0 | 170 | 0.1512 | 0.6462 | 0.6948 | 0.9659 | nan | 0.3992 | 0.9904 | nan | 0.3271 | 0.9653 |
| 0.1163 | 90.0 | 180 | 0.1493 | 0.6469 | 0.6932 | 0.9663 | nan | 0.3953 | 0.9911 | nan | 0.3280 | 0.9658 |
| 0.0998 | 95.0 | 190 | 0.1481 | 0.6457 | 0.6894 | 0.9666 | nan | 0.3869 | 0.9918 | nan | 0.3253 | 0.9661 |
| 0.0997 | 100.0 | 200 | 0.1465 | 0.6454 | 0.6893 | 0.9665 | nan | 0.3869 | 0.9917 | nan | 0.3247 | 0.9660 |
| 0.0998 | 105.0 | 210 | 0.1473 | 0.6488 | 0.6937 | 0.9668 | nan | 0.3958 | 0.9916 | nan | 0.3313 | 0.9662 |
| 0.1003 | 110.0 | 220 | 0.1437 | 0.6401 | 0.6774 | 0.9671 | nan | 0.3614 | 0.9934 | nan | 0.3136 | 0.9666 |
| 0.0932 | 115.0 | 230 | 0.1434 | 0.6469 | 0.6898 | 0.9669 | nan | 0.3876 | 0.9920 | nan | 0.3275 | 0.9664 |
| 0.0942 | 120.0 | 240 | 0.1429 | 0.6443 | 0.6853 | 0.9669 | nan | 0.3782 | 0.9925 | nan | 0.3223 | 0.9664 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
arekpaterak/Reinforce-Pixelcopter-PLE-v0 | arekpaterak | 2024-02-12T13:57:28Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T13:07:51Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.30 +/- 17.93
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sam1120/dropoff-utcustom-train-SF-RGBD-b5_4 | sam1120 | 2024-02-12T13:56:36Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T13:24:40Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGBD-b5_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGBD-b5_4
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2351
- Mean Iou: 0.4792
- Mean Accuracy: 0.5
- Overall Accuracy: 0.9584
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.0
- Accuracy Undropoff: 1.0
- Iou Unlabeled: nan
- Iou Dropoff: 0.0
- Iou Undropoff: 0.9584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.0114 | 5.0 | 10 | 1.0037 | 0.2459 | 0.4345 | 0.7074 | nan | 0.1368 | 0.7322 | 0.0 | 0.0286 | 0.7089 |
| 0.9088 | 10.0 | 20 | 0.8245 | 0.3119 | 0.5046 | 0.8887 | nan | 0.0857 | 0.9235 | 0.0 | 0.0460 | 0.8897 |
| 0.8029 | 15.0 | 30 | 0.6620 | 0.3157 | 0.4998 | 0.9214 | nan | 0.0399 | 0.9596 | 0.0 | 0.0253 | 0.9219 |
| 0.6935 | 20.0 | 40 | 0.5662 | 0.3154 | 0.4959 | 0.9309 | nan | 0.0214 | 0.9704 | 0.0 | 0.0151 | 0.9311 |
| 0.635 | 25.0 | 50 | 0.5018 | 0.3175 | 0.4978 | 0.9401 | nan | 0.0153 | 0.9803 | 0.0 | 0.0121 | 0.9404 |
| 0.5579 | 30.0 | 60 | 0.4701 | 0.3178 | 0.4978 | 0.9422 | nan | 0.0131 | 0.9825 | 0.0 | 0.0111 | 0.9423 |
| 0.5086 | 35.0 | 70 | 0.4403 | 0.3181 | 0.4977 | 0.9459 | nan | 0.0088 | 0.9866 | 0.0 | 0.0080 | 0.9461 |
| 0.472 | 40.0 | 80 | 0.4328 | 0.3177 | 0.4971 | 0.9471 | nan | 0.0063 | 0.9879 | 0.0 | 0.0059 | 0.9473 |
| 0.4484 | 45.0 | 90 | 0.4136 | 0.3184 | 0.4981 | 0.9506 | nan | 0.0046 | 0.9916 | 0.0 | 0.0044 | 0.9508 |
| 0.4026 | 50.0 | 100 | 0.4013 | 0.3186 | 0.4985 | 0.9516 | nan | 0.0043 | 0.9926 | 0.0 | 0.0042 | 0.9517 |
| 0.3873 | 55.0 | 110 | 0.3621 | 0.3189 | 0.4991 | 0.9557 | nan | 0.0010 | 0.9971 | 0.0 | 0.0009 | 0.9557 |
| 0.3549 | 60.0 | 120 | 0.3479 | 0.3189 | 0.4992 | 0.9564 | nan | 0.0004 | 0.9979 | 0.0 | 0.0004 | 0.9564 |
| 0.3358 | 65.0 | 130 | 0.3282 | 0.3191 | 0.4994 | 0.9571 | nan | 0.0001 | 0.9986 | 0.0 | 0.0001 | 0.9571 |
| 0.3146 | 70.0 | 140 | 0.3141 | 0.3193 | 0.4996 | 0.9577 | nan | 0.0000 | 0.9993 | 0.0 | 0.0000 | 0.9577 |
| 0.3116 | 75.0 | 150 | 0.2941 | 0.3194 | 0.4999 | 0.9582 | nan | 0.0 | 0.9998 | 0.0 | 0.0 | 0.9582 |
| 0.3151 | 80.0 | 160 | 0.2809 | 0.3195 | 0.5000 | 0.9584 | nan | 0.0 | 0.9999 | 0.0 | 0.0 | 0.9584 |
| 0.2778 | 85.0 | 170 | 0.2750 | 0.3195 | 0.5000 | 0.9584 | nan | 0.0 | 1.0000 | 0.0 | 0.0 | 0.9584 |
| 0.2753 | 90.0 | 180 | 0.2615 | 0.3195 | 0.5000 | 0.9584 | nan | 0.0 | 1.0000 | 0.0 | 0.0 | 0.9584 |
| 0.2809 | 95.0 | 190 | 0.2547 | 0.4792 | 0.5 | 0.9584 | nan | 0.0 | 1.0 | nan | 0.0 | 0.9584 |
| 0.2606 | 100.0 | 200 | 0.2464 | 0.4792 | 0.5 | 0.9584 | nan | 0.0 | 1.0 | nan | 0.0 | 0.9584 |
| 0.2563 | 105.0 | 210 | 0.2459 | 0.4792 | 0.5 | 0.9584 | nan | 0.0 | 1.0 | nan | 0.0 | 0.9584 |
| 0.2454 | 110.0 | 220 | 0.2393 | 0.4792 | 0.5 | 0.9584 | nan | 0.0 | 1.0 | nan | 0.0 | 0.9584 |
| 0.2707 | 115.0 | 230 | 0.2368 | 0.4792 | 0.5 | 0.9584 | nan | 0.0 | 1.0 | nan | 0.0 | 0.9584 |
| 0.2433 | 120.0 | 240 | 0.2351 | 0.4792 | 0.5 | 0.9584 | nan | 0.0 | 1.0 | nan | 0.0 | 0.9584 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Shijia/furina_seed42_eng_amh_hau_roman | Shijia | 2024-02-12T13:54:35Z | 101 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T13:53:48Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_seed42_eng_amh_hau_roman
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_seed42_eng_amh_hau_roman
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0233
- Spearman Corr: 0.7621
## Model description
More information needed
## Intended uses & limitations
More information needed
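As a hedged starting point only, the snippet below assumes the checkpoint loads as a standard sequence-classification model with a single regression output scored on sentence pairs; both the pair-style input and the one-dimensional head are assumptions inferred from the Spearman-correlation metric above, not documented behaviour.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Shijia/furina_seed42_eng_amh_hau_roman"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

# Score a (placeholder) sentence pair; higher logits should mean higher relatedness.
inputs = tokenizer("first sentence", "second sentence", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```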
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.58 | 200 | 0.0306 | 0.6454 |
| No log | 1.15 | 400 | 0.0353 | 0.6854 |
| No log | 1.73 | 600 | 0.0298 | 0.7055 |
| 0.0458 | 2.3 | 800 | 0.0307 | 0.7105 |
| 0.0458 | 2.88 | 1000 | 0.0263 | 0.7299 |
| 0.0458 | 3.45 | 1200 | 0.0273 | 0.7357 |
| 0.0222 | 4.03 | 1400 | 0.0255 | 0.7374 |
| 0.0222 | 4.6 | 1600 | 0.0268 | 0.7398 |
| 0.0222 | 5.18 | 1800 | 0.0316 | 0.7371 |
| 0.0222 | 5.76 | 2000 | 0.0245 | 0.7445 |
| 0.0155 | 6.33 | 2200 | 0.0264 | 0.7484 |
| 0.0155 | 6.91 | 2400 | 0.0311 | 0.7549 |
| 0.0155 | 7.48 | 2600 | 0.0223 | 0.7585 |
| 0.0112 | 8.06 | 2800 | 0.0257 | 0.7483 |
| 0.0112 | 8.63 | 3000 | 0.0240 | 0.7507 |
| 0.0112 | 9.21 | 3200 | 0.0275 | 0.7609 |
| 0.0112 | 9.78 | 3400 | 0.0265 | 0.7565 |
| 0.0086 | 10.36 | 3600 | 0.0250 | 0.7534 |
| 0.0086 | 10.94 | 3800 | 0.0285 | 0.7577 |
| 0.0086 | 11.51 | 4000 | 0.0225 | 0.7625 |
| 0.007 | 12.09 | 4200 | 0.0233 | 0.7621 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
sam1120/dropoff-utcustom-train-SF-RGBD-b5_2 | sam1120 | 2024-02-12T13:41:56Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T13:23:34Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGBD-b5_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGBD-b5_2
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4198
- Mean Iou: 0.3194
- Mean Accuracy: 0.4998
- Overall Accuracy: 0.9558
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.0023
- Accuracy Undropoff: 0.9972
- Iou Unlabeled: 0.0
- Iou Dropoff: 0.0022
- Iou Undropoff: 0.9558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 0.989 | 5.0 | 10 | 1.0190 | 0.2162 | 0.5831 | 0.5879 | nan | 0.5779 | 0.5883 | 0.0 | 0.0657 | 0.5829 |
| 0.9092 | 10.0 | 20 | 0.8686 | 0.3164 | 0.5199 | 0.8922 | nan | 0.1137 | 0.9260 | 0.0 | 0.0539 | 0.8953 |
| 0.8483 | 15.0 | 30 | 0.7438 | 0.3256 | 0.5234 | 0.9219 | nan | 0.0888 | 0.9581 | 0.0 | 0.0545 | 0.9224 |
| 0.7856 | 20.0 | 40 | 0.6571 | 0.3182 | 0.5013 | 0.9336 | nan | 0.0297 | 0.9728 | 0.0 | 0.0210 | 0.9335 |
| 0.7459 | 25.0 | 50 | 0.6144 | 0.3164 | 0.4980 | 0.9324 | nan | 0.0242 | 0.9718 | 0.0 | 0.0168 | 0.9324 |
| 0.7027 | 30.0 | 60 | 0.5861 | 0.3168 | 0.4975 | 0.9351 | nan | 0.0202 | 0.9748 | 0.0 | 0.0151 | 0.9353 |
| 0.6827 | 35.0 | 70 | 0.5568 | 0.3171 | 0.4975 | 0.9391 | nan | 0.0159 | 0.9791 | 0.0 | 0.0122 | 0.9391 |
| 0.6362 | 40.0 | 80 | 0.5405 | 0.3179 | 0.4982 | 0.9424 | nan | 0.0138 | 0.9827 | 0.0 | 0.0112 | 0.9425 |
| 0.6098 | 45.0 | 90 | 0.5192 | 0.3174 | 0.4971 | 0.9449 | nan | 0.0087 | 0.9855 | 0.0 | 0.0073 | 0.9449 |
| 0.5946 | 50.0 | 100 | 0.5025 | 0.3179 | 0.4978 | 0.9475 | nan | 0.0072 | 0.9883 | 0.0 | 0.0062 | 0.9477 |
| 0.5868 | 55.0 | 110 | 0.4943 | 0.3179 | 0.4976 | 0.9490 | nan | 0.0052 | 0.9900 | 0.0 | 0.0046 | 0.9491 |
| 0.5557 | 60.0 | 120 | 0.4798 | 0.3184 | 0.4983 | 0.9505 | nan | 0.0051 | 0.9915 | 0.0 | 0.0045 | 0.9506 |
| 0.5327 | 65.0 | 130 | 0.4736 | 0.3184 | 0.4983 | 0.9514 | nan | 0.0041 | 0.9925 | 0.0 | 0.0038 | 0.9514 |
| 0.525 | 70.0 | 140 | 0.4657 | 0.3187 | 0.4987 | 0.9526 | nan | 0.0038 | 0.9937 | 0.0 | 0.0035 | 0.9526 |
| 0.5266 | 75.0 | 150 | 0.4528 | 0.3190 | 0.4992 | 0.9534 | nan | 0.0037 | 0.9946 | 0.0 | 0.0034 | 0.9535 |
| 0.5139 | 80.0 | 160 | 0.4538 | 0.3189 | 0.4991 | 0.9533 | nan | 0.0037 | 0.9945 | 0.0 | 0.0035 | 0.9534 |
| 0.5128 | 85.0 | 170 | 0.4460 | 0.3192 | 0.4995 | 0.9543 | nan | 0.0033 | 0.9956 | 0.0 | 0.0031 | 0.9543 |
| 0.4901 | 90.0 | 180 | 0.4371 | 0.3192 | 0.4995 | 0.9548 | nan | 0.0029 | 0.9961 | 0.0 | 0.0027 | 0.9548 |
| 0.4767 | 95.0 | 190 | 0.4325 | 0.3193 | 0.4997 | 0.9552 | nan | 0.0029 | 0.9965 | 0.0 | 0.0027 | 0.9552 |
| 0.4692 | 100.0 | 200 | 0.4272 | 0.3193 | 0.4997 | 0.9556 | nan | 0.0024 | 0.9970 | 0.0 | 0.0023 | 0.9556 |
| 0.4632 | 105.0 | 210 | 0.4251 | 0.3193 | 0.4996 | 0.9556 | nan | 0.0023 | 0.9969 | 0.0 | 0.0023 | 0.9556 |
| 0.4626 | 110.0 | 220 | 0.4236 | 0.3193 | 0.4997 | 0.9556 | nan | 0.0024 | 0.9970 | 0.0 | 0.0024 | 0.9556 |
| 0.4837 | 115.0 | 230 | 0.4216 | 0.3194 | 0.4998 | 0.9558 | nan | 0.0023 | 0.9972 | 0.0 | 0.0023 | 0.9558 |
| 0.4809 | 120.0 | 240 | 0.4198 | 0.3194 | 0.4998 | 0.9558 | nan | 0.0023 | 0.9972 | 0.0 | 0.0022 | 0.9558 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
alitolga/electra-base-generator-rank8 | alitolga | 2024-02-12T13:41:55Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/electra-base-generator",
"base_model:finetune:google/electra-base-generator",
"license:apache-2.0",
"region:us"
] | null | 2024-02-12T13:41:17Z | ---
license: apache-2.0
base_model: google/electra-base-generator
tags:
- generated_from_trainer
model-index:
- name: electra-base-generator-rank8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-generator-rank8
This model is a fine-tuned version of [google/electra-base-generator](https://huggingface.co/google/electra-base-generator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.2296 | 1.0 | 179 | 3.8171 |
| 3.6406 | 2.0 | 358 | 3.3218 |
| 3.395 | 3.0 | 537 | 3.2562 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
yemen2016/MeMo-BERT-WSD_old | yemen2016 | 2024-02-12T13:40:04Z | 48 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"da",
"base_model:MiMe-MeMo/MeMo-BERT-01",
"base_model:finetune:MiMe-MeMo/MeMo-BERT-01",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-09T12:37:02Z | ---
base_model: MiMe-MeMo/MeMo-BERT-01
tags:
- generated_from_trainer
model-index:
- name: new_memo_model
results: []
language: da # <-- my language
widget:
- text: "Men havde Gud vendt sig fra ham , saa kunde han ogsaa vende sig fra Gud . Havde Gud ingen Øren , saa havde han heller ingen Læber , havde Gud ingen Naade , saa havde han heller ingen Tilbedelse , og han trodsede og viste Gud ud af sit Hjærte ."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MeMo Model (Word Sense Disambiguation)
This model is a fine-tuned version of [MiMe-MeMo/MeMo-BERT-01](https://huggingface.co/MiMe-MeMo/MeMo-BERT-01) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7214
- F1-score: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
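As an informal example rather than an official usage guide, the checkpoint can be queried with the standard `transformers` text-classification pipeline; the Danish sentence below is the widget example from the metadata above.
```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned word-sense-disambiguation checkpoint.
classifier = pipeline("text-classification", model="yemen2016/MeMo-BERT-WSD_old")

# Widget sentence from this card's metadata.
sentence = "Men havde Gud vendt sig fra ham , saa kunde han ogsaa vende sig fra Gud ."
print(classifier(sentence))  # -> [{'label': ..., 'score': ...}]
```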
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 11 | 0.7214 | 0.6667 |
| No log | 2.0 | 22 | 1.2543 | 0.5429 |
| No log | 3.0 | 33 | 1.0829 | 0.6837 |
| No log | 4.0 | 44 | 1.3815 | 0.7552 |
| No log | 5.0 | 55 | 1.4733 | 0.7005 |
| No log | 6.0 | 66 | 2.3876 | 0.5513 |
| No log | 7.0 | 77 | 1.3215 | 0.8004 |
| No log | 8.0 | 88 | 1.4006 | 0.7608 |
| No log | 9.0 | 99 | 1.4862 | 0.7608 |
| No log | 10.0 | 110 | 1.4974 | 0.7608 |
| No log | 11.0 | 121 | 1.4966 | 0.7608 |
| No log | 12.0 | 132 | 1.5040 | 0.7608 |
| No log | 13.0 | 143 | 1.5010 | 0.7608 |
| No log | 14.0 | 154 | 1.4741 | 0.7608 |
| No log | 15.0 | 165 | 1.4507 | 0.7608 |
| No log | 16.0 | 176 | 1.4420 | 0.7608 |
| No log | 17.0 | 187 | 1.4398 | 0.7608 |
| No log | 18.0 | 198 | 1.4426 | 0.7608 |
| No log | 19.0 | 209 | 1.4438 | 0.7608 |
| No log | 20.0 | 220 | 1.4439 | 0.7608 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
alitolga/electra-base-generator-rank4 | alitolga | 2024-02-12T13:36:31Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/electra-base-generator",
"base_model:finetune:google/electra-base-generator",
"license:apache-2.0",
"region:us"
] | null | 2024-02-12T13:35:29Z | ---
license: apache-2.0
base_model: google/electra-base-generator
tags:
- generated_from_trainer
model-index:
- name: electra-base-generator-rank4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-generator-rank4
This model is a fine-tuned version of [google/electra-base-generator](https://huggingface.co/google/electra-base-generator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2603
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.3543 | 1.0 | 179 | 3.9048 |
| 3.7115 | 2.0 | 358 | 3.3385 |
| 3.4042 | 3.0 | 537 | 3.2603 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ylacombe/musicgen-melody-bella-ciao | ylacombe | 2024-02-12T13:32:46Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"musicgen_melody_decoder",
"text-generation",
"text-to-audio",
"ylacombe/bella_ciao",
"generated_from_trainer",
"base_model:ylacombe/musicgen-melody",
"base_model:finetune:ylacombe/musicgen-melody",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-02-08T19:48:18Z | ---
base_model: ylacombe/musicgen-melody
tags:
- text-to-audio
- ylacombe/bella_ciao
- generated_from_trainer
model-index:
- name: musicgen-melody-bella-ciao
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# musicgen-melody-bella-ciao
This model is a fine-tuned version of [ylacombe/musicgen-melody](https://huggingface.co/ylacombe/musicgen-melody) on the YLACOMBE/BELLA_CIAO - DEFAULT dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 456
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
alitolga/electra-base-generator-rank2 | alitolga | 2024-02-12T13:31:54Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/electra-base-generator",
"base_model:finetune:google/electra-base-generator",
"license:apache-2.0",
"region:us"
] | null | 2024-02-12T13:25:46Z | ---
license: apache-2.0
base_model: google/electra-base-generator
tags:
- generated_from_trainer
model-index:
- name: electra-base-generator-rank2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-generator-rank2
This model is a fine-tuned version of [google/electra-base-generator](https://huggingface.co/google/electra-base-generator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.206 | 1.0 | 179 | 3.8146 |
| 3.5779 | 2.0 | 358 | 3.2736 |
| 3.3568 | 3.0 | 537 | 3.2155 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
eren23/sd15-FantasyMix-filmGrain-segmoe | eren23 | 2024-02-12T13:31:44Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"segmoe",
"merge",
"moe",
"sd1.5",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-12T13:17:43Z | ---
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- segmoe
- merge
- moe
- sd1.5
---
This model is a SegMoE merge of two models from CivitAI:

- https://civitai.com/models/234898/vixons-fantasy-mix
- https://civitai.com/models/43977?modelVersionId=113623

Merged using the SegMoE project: https://github.com/segmind/segmoe

To do something similar, you can either follow the guide in the project README or follow this blog post: https://huggingface.co/blog/segmoe

The settings I used:
```yaml
base_model: https://civitai.com/api/download/models/306781
num_experts: 4
moe_layers: all
num_experts_per_tok: 2
type: sd
experts:
  - source_model: https://civitai.com/api/download/models/306781
    positive_prompt: "cinematic, portrait, photograph, instagram, fashion, movie, macro shot, 8K, RAW, fantastic, ultra high quality"
    negative_prompt: "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck"
  - source_model: https://civitai.com/api/download/models/113623
    positive_prompt: "photo realistic scenes, fantastic view, impressive view, movie scene, 8K, RAW, hyperrealistic, ultra realistic"
    negative_prompt: "simple background, duplicate, retro style, low quality, lowest quality, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013, bad anatomy, bad proportions, extra digits, lowres, username, artist name, error, duplicate, watermark, signature, text, extra digit, fewer digits, worst quality, jpeg artifacts, blurry"
```
# Usage
```bash
pip install -U segmoe diffusers transformers
```

```python
from segmoe import SegMoEPipeline

pipeline = SegMoEPipeline("eren23/sd15-FantasyMix-filmGrain-segmoe", device="cuda")

prompt = "fantastic land canvas, knight cat standing next to a purple medieval village wall"
negative_prompt = "nsfw, bad quality, worse quality"

img = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=512,
    width=512,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
img.save("image.png")
```
 |
Annikaijak/bert_classification | Annikaijak | 2024-02-12T13:31:36Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T13:31:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
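Pending author-provided instructions, a minimal loading sketch (assuming the tokenizer was pushed alongside the weights and that the default label mapping in the config is meaningful):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_id = "Annikaijak/bert_classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Example sentence to classify."))
```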
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hweemiin/ppo-LunarLander-v2 | hweemiin | 2024-02-12T13:31:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T13:31:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 214.81 +/- 68.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
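A minimal sketch for loading and evaluating this checkpoint; the checkpoint filename below follows the usual `huggingface_sb3` naming convention and is an assumption, as is the use of `gymnasium` (`LunarLander-v2` also requires the Box2D extra):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumed default).
checkpoint = load_from_hub(
    repo_id="hweemiin/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```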
|
anupk/akmixtral-v1 | anupk | 2024-02-12T13:29:47Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-12T13:24:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
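Pending author-provided instructions, a minimal sketch for loading this Mixtral-style checkpoint in 4-bit (as suggested by the `4-bit`/`bitsandbytes` tags); it assumes a chat template is defined in the tokenizer config:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "anupk/akmixtral-v1"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Assumes the tokenizer ships a chat template; otherwise format the prompt manually.
messages = [{"role": "user", "content": "Write a haiku about the sea."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```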
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HamidBekam/bert_classification_v1 | HamidBekam | 2024-02-12T13:28:36Z | 95 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T13:27:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MikkelONielsen/bert_classification | MikkelONielsen | 2024-02-12T13:28:09Z | 94 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T13:27:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Camillahannesbo/Camillas_bert_model | Camillahannesbo | 2024-02-12T13:27:32Z | 94 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T13:26:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alitolga/deberta-v3-base-rank64 | alitolga | 2024-02-12T13:23:13Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"region:us"
] | null | 2024-02-12T13:13:11Z | ---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base-rank64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-rank64
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 13.9289 | 1.0 | 179 | 8.8618 |
| 7.5578 | 2.0 | 358 | 5.4690 |
| 5.614 | 3.0 | 537 | 4.8756 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
benj3037/bert_test | benj3037 | 2024-02-12T13:22:57Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-07T11:05:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chathuranga-jayanath/codet5-small-v26 | chathuranga-jayanath | 2024-02-12T13:14:33Z | 98 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-12T09:40:31Z | ---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
model-index:
- name: codet5-small-v26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-small-v26
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1762
- Bleu Score: 0.0007
- Gen Len: 14.3657
## Model description
Trained on the dataset: `chathuranga-jayanath/context-5-finmath-times4j-html-mavendoxia-wro4j-guava-supercsv-balanced-10k-prompt-1`.
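A minimal inference sketch (hedged: the exact input format expected by the model depends on the training dataset's prompt template, which is not documented here, so the snippet below is only a placeholder):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "chathuranga-jayanath/codet5-small-v26"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; replace with text formatted the way the training prompts were built.
inputs = tokenizer("public int add(int a, int b) { return a - b; }", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```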
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu Score | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----------:|:-------:|
| 0.2776 | 1.0 | 3407 | 0.2137 | 0.0007 | 14.2809 |
| 0.2216 | 2.0 | 6814 | 0.1836 | 0.0007 | 14.3813 |
| 0.2036 | 3.0 | 10221 | 0.1762 | 0.0007 | 14.3657 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
y-oguchi/codeparrot-ds | y-oguchi | 2024-02-12T13:05:37Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-12T10:39:30Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
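Pending a full description, a minimal generation sketch (hedged: the training corpus is not documented here, so the code-completion style prompt below is only an assumption in the spirit of the original codeparrot setup):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="y-oguchi/codeparrot-ds")
print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```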
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 96
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 768
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
sam1120/dropoff-utcustom-train-SF-RGBD-b0_7 | sam1120 | 2024-02-12T13:01:30Z | 147 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T12:53:02Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGBD-b0_7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGBD-b0_7
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2075
- Mean Iou: 0.6372
- Mean Accuracy: 0.6861
- Overall Accuracy: 0.9647
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.3822
- Accuracy Undropoff: 0.9900
- Iou Unlabeled: nan
- Iou Dropoff: 0.3104
- Iou Undropoff: 0.9641
## Model description
More information needed
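Pending a full description, a minimal semantic-segmentation inference sketch (hedged: it assumes an image processor config was pushed with the checkpoint and that the model accepts standard 3-channel input; the "RGBD" naming suggests a depth channel may actually be required, in which case preprocessing would differ):

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

model_id = "sam1120/dropoff-utcustom-train-SF-RGBD-b0_7"
processor = SegformerImageProcessor.from_pretrained(model_id)  # assumes a processor config exists in the repo
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("example.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]
```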
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 0.9508 | 5.0 | 10 | 1.0263 | 0.3104 | 0.5474 | 0.8717 | nan | 0.1937 | 0.9011 | 0.0 | 0.0605 | 0.8706 |
| 0.7814 | 10.0 | 20 | 0.7568 | 0.4971 | 0.5339 | 0.9361 | nan | 0.0952 | 0.9726 | nan | 0.0584 | 0.9359 |
| 0.642 | 15.0 | 30 | 0.5907 | 0.5134 | 0.5443 | 0.9494 | nan | 0.1026 | 0.9861 | nan | 0.0777 | 0.9492 |
| 0.5118 | 20.0 | 40 | 0.4804 | 0.3658 | 0.5923 | 0.9513 | nan | 0.2006 | 0.9839 | 0.0 | 0.1464 | 0.9509 |
| 0.4581 | 25.0 | 50 | 0.4405 | 0.3715 | 0.5915 | 0.9569 | nan | 0.1930 | 0.9900 | 0.0 | 0.1578 | 0.9565 |
| 0.4213 | 30.0 | 60 | 0.4146 | 0.3828 | 0.6136 | 0.9580 | nan | 0.2379 | 0.9892 | 0.0 | 0.1910 | 0.9575 |
| 0.3571 | 35.0 | 70 | 0.3750 | 0.3846 | 0.6180 | 0.9578 | nan | 0.2474 | 0.9887 | 0.0 | 0.1963 | 0.9574 |
| 0.3205 | 40.0 | 80 | 0.3478 | 0.5777 | 0.6202 | 0.9576 | nan | 0.2522 | 0.9882 | nan | 0.1982 | 0.9571 |
| 0.3114 | 45.0 | 90 | 0.3461 | 0.3895 | 0.6423 | 0.9541 | nan | 0.3022 | 0.9824 | 0.0 | 0.2150 | 0.9535 |
| 0.2747 | 50.0 | 100 | 0.3253 | 0.5875 | 0.6357 | 0.9575 | nan | 0.2847 | 0.9867 | nan | 0.2180 | 0.9570 |
| 0.2593 | 55.0 | 110 | 0.3083 | 0.5967 | 0.6599 | 0.9552 | nan | 0.3377 | 0.9820 | nan | 0.2387 | 0.9546 |
| 0.2293 | 60.0 | 120 | 0.2762 | 0.5966 | 0.6389 | 0.9606 | nan | 0.2880 | 0.9898 | nan | 0.2331 | 0.9601 |
| 0.2306 | 65.0 | 130 | 0.2655 | 0.6016 | 0.6587 | 0.9577 | nan | 0.3326 | 0.9848 | nan | 0.2462 | 0.9571 |
| 0.2118 | 70.0 | 140 | 0.2446 | 0.6039 | 0.6509 | 0.9605 | nan | 0.3133 | 0.9886 | nan | 0.2479 | 0.9600 |
| 0.2038 | 75.0 | 150 | 0.2395 | 0.6164 | 0.6708 | 0.9607 | nan | 0.3547 | 0.9870 | nan | 0.2727 | 0.9601 |
| 0.1895 | 80.0 | 160 | 0.2196 | 0.6254 | 0.6721 | 0.9636 | nan | 0.3542 | 0.9900 | nan | 0.2878 | 0.9630 |
| 0.1681 | 85.0 | 170 | 0.2176 | 0.6302 | 0.6829 | 0.9630 | nan | 0.3773 | 0.9884 | nan | 0.2979 | 0.9624 |
| 0.1612 | 90.0 | 180 | 0.2175 | 0.6334 | 0.6870 | 0.9633 | nan | 0.3857 | 0.9884 | nan | 0.3042 | 0.9627 |
| 0.1545 | 95.0 | 190 | 0.2140 | 0.6337 | 0.6816 | 0.9644 | nan | 0.3732 | 0.9900 | nan | 0.3035 | 0.9638 |
| 0.1551 | 100.0 | 200 | 0.2134 | 0.6357 | 0.6891 | 0.9637 | nan | 0.3896 | 0.9886 | nan | 0.3083 | 0.9631 |
| 0.1508 | 105.0 | 210 | 0.2090 | 0.6359 | 0.6865 | 0.9642 | nan | 0.3837 | 0.9894 | nan | 0.3083 | 0.9636 |
| 0.1536 | 110.0 | 220 | 0.2057 | 0.6346 | 0.6801 | 0.9650 | nan | 0.3694 | 0.9908 | nan | 0.3048 | 0.9644 |
| 0.1392 | 115.0 | 230 | 0.2083 | 0.6387 | 0.6890 | 0.9646 | nan | 0.3883 | 0.9896 | nan | 0.3133 | 0.9640 |
| 0.1446 | 120.0 | 240 | 0.2075 | 0.6372 | 0.6861 | 0.9647 | nan | 0.3822 | 0.9900 | nan | 0.3104 | 0.9641 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sam1120/dropoff-utcustom-train-SF-RGBD-b0_3 | sam1120 | 2024-02-12T13:01:18Z | 146 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T12:52:47Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGBD-b0_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGBD-b0_3
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3666
- Mean Iou: 0.6400
- Mean Accuracy: 0.7120
- Overall Accuracy: 0.9610
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.4404
- Accuracy Undropoff: 0.9836
- Iou Unlabeled: nan
- Iou Dropoff: 0.3196
- Iou Undropoff: 0.9603
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.0352 | 5.0 | 10 | 1.0676 | 0.2560 | 0.5776 | 0.7142 | nan | 0.4286 | 0.7266 | 0.0 | 0.0589 | 0.7090 |
| 0.9564 | 10.0 | 20 | 0.9743 | 0.3355 | 0.5576 | 0.9248 | nan | 0.1571 | 0.9581 | 0.0 | 0.0822 | 0.9243 |
| 0.8577 | 15.0 | 30 | 0.8504 | 0.3318 | 0.5283 | 0.9409 | nan | 0.0782 | 0.9784 | 0.0 | 0.0545 | 0.9407 |
| 0.7512 | 20.0 | 40 | 0.6972 | 0.3270 | 0.5122 | 0.9527 | nan | 0.0318 | 0.9926 | 0.0 | 0.0283 | 0.9526 |
| 0.6955 | 25.0 | 50 | 0.5761 | 0.3259 | 0.5099 | 0.9545 | nan | 0.0250 | 0.9948 | 0.0 | 0.0234 | 0.9544 |
| 0.6691 | 30.0 | 60 | 0.5209 | 0.3360 | 0.5271 | 0.9525 | nan | 0.0632 | 0.9911 | 0.0 | 0.0557 | 0.9524 |
| 0.626 | 35.0 | 70 | 0.5297 | 0.3408 | 0.5362 | 0.9505 | nan | 0.0844 | 0.9881 | 0.0 | 0.0719 | 0.9503 |
| 0.5544 | 40.0 | 80 | 0.5263 | 0.3616 | 0.5757 | 0.9521 | nan | 0.1652 | 0.9862 | 0.0 | 0.1330 | 0.9518 |
| 0.5316 | 45.0 | 90 | 0.4825 | 0.3836 | 0.6353 | 0.9506 | nan | 0.2915 | 0.9792 | 0.0 | 0.2009 | 0.9500 |
| 0.4929 | 50.0 | 100 | 0.4763 | 0.3958 | 0.6588 | 0.9530 | nan | 0.3378 | 0.9797 | 0.0 | 0.2352 | 0.9524 |
| 0.468 | 55.0 | 110 | 0.4583 | 0.4077 | 0.6974 | 0.9528 | nan | 0.4188 | 0.9759 | 0.0 | 0.2713 | 0.9519 |
| 0.429 | 60.0 | 120 | 0.4268 | 0.3985 | 0.6526 | 0.9575 | nan | 0.3199 | 0.9852 | 0.0 | 0.2386 | 0.9569 |
| 0.4211 | 65.0 | 130 | 0.3988 | 0.3951 | 0.6406 | 0.9584 | nan | 0.2939 | 0.9872 | 0.0 | 0.2275 | 0.9578 |
| 0.3926 | 70.0 | 140 | 0.4085 | 0.4102 | 0.6780 | 0.9587 | nan | 0.3718 | 0.9842 | 0.0 | 0.2726 | 0.9581 |
| 0.4006 | 75.0 | 150 | 0.3944 | 0.6077 | 0.6574 | 0.9604 | nan | 0.3269 | 0.9879 | nan | 0.2555 | 0.9599 |
| 0.3978 | 80.0 | 160 | 0.3881 | 0.6216 | 0.6875 | 0.9591 | nan | 0.3912 | 0.9838 | nan | 0.2848 | 0.9585 |
| 0.3553 | 85.0 | 170 | 0.3877 | 0.6333 | 0.7077 | 0.9595 | nan | 0.4329 | 0.9824 | nan | 0.3079 | 0.9588 |
| 0.3637 | 90.0 | 180 | 0.4004 | 0.6428 | 0.7273 | 0.9594 | nan | 0.4741 | 0.9805 | nan | 0.3270 | 0.9586 |
| 0.3416 | 95.0 | 190 | 0.3835 | 0.6403 | 0.7166 | 0.9604 | nan | 0.4507 | 0.9825 | nan | 0.3210 | 0.9596 |
| 0.342 | 100.0 | 200 | 0.3634 | 0.6371 | 0.7061 | 0.9611 | nan | 0.4279 | 0.9842 | nan | 0.3137 | 0.9604 |
| 0.3393 | 105.0 | 210 | 0.3740 | 0.6429 | 0.7217 | 0.9604 | nan | 0.4614 | 0.9820 | nan | 0.3262 | 0.9596 |
| 0.3535 | 110.0 | 220 | 0.3771 | 0.6423 | 0.7199 | 0.9605 | nan | 0.4575 | 0.9823 | nan | 0.3249 | 0.9597 |
| 0.3159 | 115.0 | 230 | 0.3710 | 0.6423 | 0.7167 | 0.9610 | nan | 0.4502 | 0.9832 | nan | 0.3243 | 0.9603 |
| 0.3278 | 120.0 | 240 | 0.3666 | 0.6400 | 0.7120 | 0.9610 | nan | 0.4404 | 0.9836 | nan | 0.3196 | 0.9603 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sam1120/dropoff-utcustom-train-SF-RGBD-b0_1 | sam1120 | 2024-02-12T13:01:12Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T12:52:21Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGBD-b0_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGBD-b0_1
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4979
- Mean Iou: 0.4170
- Mean Accuracy: 0.6846
- Overall Accuracy: 0.9603
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.3839
- Accuracy Undropoff: 0.9853
- Iou Unlabeled: 0.0
- Iou Dropoff: 0.2914
- Iou Undropoff: 0.9597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.0495 | 5.0 | 10 | 1.0890 | 0.1852 | 0.3572 | 0.4990 | nan | 0.2026 | 0.5119 | 0.0 | 0.0474 | 0.5081 |
| 0.9941 | 10.0 | 20 | 1.0479 | 0.3452 | 0.8357 | 0.8479 | nan | 0.8225 | 0.8490 | 0.0 | 0.1931 | 0.8425 |
| 0.9448 | 15.0 | 30 | 0.9839 | 0.3790 | 0.8217 | 0.9010 | nan | 0.7351 | 0.9082 | 0.0 | 0.2390 | 0.8980 |
| 0.8912 | 20.0 | 40 | 0.9041 | 0.3845 | 0.7150 | 0.9247 | nan | 0.4863 | 0.9437 | 0.0 | 0.2303 | 0.9233 |
| 0.8458 | 25.0 | 50 | 0.7997 | 0.3835 | 0.6687 | 0.9326 | nan | 0.3808 | 0.9565 | 0.0 | 0.2188 | 0.9316 |
| 0.8299 | 30.0 | 60 | 0.7387 | 0.3751 | 0.6333 | 0.9326 | nan | 0.3068 | 0.9597 | 0.0 | 0.1934 | 0.9318 |
| 0.7518 | 35.0 | 70 | 0.6810 | 0.3791 | 0.6322 | 0.9404 | nan | 0.2961 | 0.9683 | 0.0 | 0.1975 | 0.9397 |
| 0.6943 | 40.0 | 80 | 0.6322 | 0.3703 | 0.6069 | 0.9422 | nan | 0.2411 | 0.9726 | 0.0 | 0.1691 | 0.9417 |
| 0.6617 | 45.0 | 90 | 0.6071 | 0.3780 | 0.6240 | 0.9454 | nan | 0.2734 | 0.9746 | 0.0 | 0.1892 | 0.9449 |
| 0.634 | 50.0 | 100 | 0.5932 | 0.3765 | 0.6106 | 0.9497 | nan | 0.2407 | 0.9805 | 0.0 | 0.1802 | 0.9494 |
| 0.6157 | 55.0 | 110 | 0.5829 | 0.3982 | 0.6538 | 0.9524 | nan | 0.3281 | 0.9795 | 0.0 | 0.2425 | 0.9520 |
| 0.5814 | 60.0 | 120 | 0.5708 | 0.4038 | 0.6699 | 0.9533 | nan | 0.3608 | 0.9790 | 0.0 | 0.2586 | 0.9528 |
| 0.5988 | 65.0 | 130 | 0.5575 | 0.3974 | 0.6456 | 0.9569 | nan | 0.3061 | 0.9851 | 0.0 | 0.2357 | 0.9564 |
| 0.5583 | 70.0 | 140 | 0.5530 | 0.4224 | 0.7075 | 0.9576 | nan | 0.4346 | 0.9803 | 0.0 | 0.3103 | 0.9570 |
| 0.5596 | 75.0 | 150 | 0.5264 | 0.4034 | 0.6522 | 0.9598 | nan | 0.3167 | 0.9877 | 0.0 | 0.2510 | 0.9593 |
| 0.5524 | 80.0 | 160 | 0.5392 | 0.4208 | 0.7109 | 0.9567 | nan | 0.4429 | 0.9790 | 0.0 | 0.3065 | 0.9560 |
| 0.5294 | 85.0 | 170 | 0.5257 | 0.4161 | 0.6913 | 0.9582 | nan | 0.4002 | 0.9824 | 0.0 | 0.2909 | 0.9576 |
| 0.5477 | 90.0 | 180 | 0.5178 | 0.4207 | 0.6962 | 0.9591 | nan | 0.4095 | 0.9829 | 0.0 | 0.3035 | 0.9584 |
| 0.528 | 95.0 | 190 | 0.5185 | 0.4183 | 0.6939 | 0.9590 | nan | 0.4047 | 0.9831 | 0.0 | 0.2965 | 0.9584 |
| 0.5144 | 100.0 | 200 | 0.5004 | 0.4153 | 0.6788 | 0.9604 | nan | 0.3716 | 0.9860 | 0.0 | 0.2859 | 0.9599 |
| 0.5313 | 105.0 | 210 | 0.5032 | 0.4199 | 0.7005 | 0.9585 | nan | 0.4191 | 0.9819 | 0.0 | 0.3020 | 0.9578 |
| 0.5172 | 110.0 | 220 | 0.4993 | 0.4188 | 0.6931 | 0.9591 | nan | 0.4030 | 0.9832 | 0.0 | 0.2978 | 0.9585 |
| 0.5124 | 115.0 | 230 | 0.4999 | 0.4167 | 0.6828 | 0.9606 | nan | 0.3799 | 0.9858 | 0.0 | 0.2901 | 0.9600 |
| 0.5025 | 120.0 | 240 | 0.4979 | 0.4170 | 0.6846 | 0.9603 | nan | 0.3839 | 0.9853 | 0.0 | 0.2914 | 0.9597 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sam1120/dropoff-utcustom-train-SF-RGBD-b0_2 | sam1120 | 2024-02-12T13:01:09Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T12:52:41Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGBD-b0_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGBD-b0_2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4274
- Mean Iou: 0.6102
- Mean Accuracy: 0.6603
- Overall Accuracy: 0.9607
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.3326
- Accuracy Undropoff: 0.9879
- Iou Unlabeled: nan
- Iou Dropoff: 0.2602
- Iou Undropoff: 0.9601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.0555 | 5.0 | 10 | 1.0734 | 0.2254 | 0.4211 | 0.6018 | nan | 0.2240 | 0.6182 | 0.0 | 0.0622 | 0.6140 |
| 0.9825 | 10.0 | 20 | 1.0261 | 0.2992 | 0.6380 | 0.7780 | nan | 0.4852 | 0.7907 | 0.0 | 0.1170 | 0.7807 |
| 0.8991 | 15.0 | 30 | 0.8985 | 0.3231 | 0.5517 | 0.8892 | nan | 0.1836 | 0.9198 | 0.0 | 0.0776 | 0.8917 |
| 0.8191 | 20.0 | 40 | 0.7413 | 0.3270 | 0.5262 | 0.9299 | nan | 0.0858 | 0.9665 | 0.0 | 0.0513 | 0.9296 |
| 0.7562 | 25.0 | 50 | 0.6268 | 0.3259 | 0.5130 | 0.9436 | nan | 0.0433 | 0.9826 | 0.0 | 0.0343 | 0.9435 |
| 0.7395 | 30.0 | 60 | 0.5872 | 0.3235 | 0.5073 | 0.9498 | nan | 0.0246 | 0.9900 | 0.0 | 0.0206 | 0.9498 |
| 0.7272 | 35.0 | 70 | 0.5820 | 0.3379 | 0.5415 | 0.9411 | nan | 0.1055 | 0.9774 | 0.0 | 0.0729 | 0.9409 |
| 0.6525 | 40.0 | 80 | 0.5571 | 0.3445 | 0.5451 | 0.9498 | nan | 0.1036 | 0.9865 | 0.0 | 0.0839 | 0.9496 |
| 0.6161 | 45.0 | 90 | 0.5465 | 0.3480 | 0.5480 | 0.9528 | nan | 0.1064 | 0.9895 | 0.0 | 0.0914 | 0.9526 |
| 0.6131 | 50.0 | 100 | 0.5379 | 0.3712 | 0.5917 | 0.9555 | nan | 0.1949 | 0.9885 | 0.0 | 0.1584 | 0.9551 |
| 0.579 | 55.0 | 110 | 0.5229 | 0.3892 | 0.6411 | 0.9536 | nan | 0.3002 | 0.9819 | 0.0 | 0.2146 | 0.9530 |
| 0.5133 | 60.0 | 120 | 0.5113 | 0.3962 | 0.6596 | 0.9541 | nan | 0.3384 | 0.9808 | 0.0 | 0.2352 | 0.9535 |
| 0.535 | 65.0 | 130 | 0.4925 | 0.3981 | 0.6566 | 0.9561 | nan | 0.3299 | 0.9833 | 0.0 | 0.2386 | 0.9555 |
| 0.4866 | 70.0 | 140 | 0.4717 | 0.5993 | 0.6516 | 0.9584 | nan | 0.3169 | 0.9863 | nan | 0.2407 | 0.9579 |
| 0.5119 | 75.0 | 150 | 0.4712 | 0.5976 | 0.6513 | 0.9578 | nan | 0.3171 | 0.9856 | nan | 0.2380 | 0.9572 |
| 0.5034 | 80.0 | 160 | 0.4737 | 0.6120 | 0.6840 | 0.9562 | nan | 0.3872 | 0.9808 | nan | 0.2686 | 0.9554 |
| 0.4503 | 85.0 | 170 | 0.4496 | 0.6103 | 0.6618 | 0.9604 | nan | 0.3361 | 0.9875 | nan | 0.2607 | 0.9598 |
| 0.4653 | 90.0 | 180 | 0.4617 | 0.6201 | 0.6907 | 0.9580 | nan | 0.3992 | 0.9822 | nan | 0.2830 | 0.9572 |
| 0.4375 | 95.0 | 190 | 0.4412 | 0.6090 | 0.6592 | 0.9605 | nan | 0.3305 | 0.9878 | nan | 0.2580 | 0.9599 |
| 0.4306 | 100.0 | 200 | 0.4355 | 0.6120 | 0.6653 | 0.9602 | nan | 0.3436 | 0.9870 | nan | 0.2643 | 0.9597 |
| 0.4456 | 105.0 | 210 | 0.4414 | 0.6178 | 0.6756 | 0.9601 | nan | 0.3653 | 0.9860 | nan | 0.2760 | 0.9595 |
| 0.4435 | 110.0 | 220 | 0.4387 | 0.6150 | 0.6681 | 0.9608 | nan | 0.3489 | 0.9873 | nan | 0.2699 | 0.9602 |
| 0.4263 | 115.0 | 230 | 0.4348 | 0.6156 | 0.6692 | 0.9607 | nan | 0.3512 | 0.9872 | nan | 0.2711 | 0.9602 |
| 0.4123 | 120.0 | 240 | 0.4274 | 0.6102 | 0.6603 | 0.9607 | nan | 0.3326 | 0.9879 | nan | 0.2602 | 0.9601 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sam1120/dropoff-utcustom-train-SF-RGBD-b0_4 | sam1120 | 2024-02-12T13:01:07Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T12:52:49Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGBD-b0_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGBD-b0_4
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3688
- Mean Iou: 0.3485
- Mean Accuracy: 0.5433
- Overall Accuracy: 0.9606
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.0881
- Accuracy Undropoff: 0.9984
- Iou Unlabeled: 0.0
- Iou Dropoff: 0.0851
- Iou Undropoff: 0.9604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.2008 | 5.0 | 10 | 1.0960 | 0.1205 | 0.4461 | 0.2825 | nan | 0.6246 | 0.2677 | 0.0 | 0.0943 | 0.2671 |
| 1.0485 | 10.0 | 20 | 1.0952 | 0.1603 | 0.6272 | 0.4049 | nan | 0.8696 | 0.3848 | 0.0 | 0.0965 | 0.3843 |
| 0.9156 | 15.0 | 30 | 1.0312 | 0.3080 | 0.5963 | 0.8333 | nan | 0.3377 | 0.8548 | 0.0 | 0.0924 | 0.8317 |
| 0.7435 | 20.0 | 40 | 0.9448 | 0.3221 | 0.5508 | 0.8937 | nan | 0.1769 | 0.9248 | 0.0 | 0.0733 | 0.8930 |
| 0.7336 | 25.0 | 50 | 0.7446 | 0.3191 | 0.4998 | 0.9461 | nan | 0.0129 | 0.9866 | 0.0 | 0.0113 | 0.9461 |
| 0.6585 | 30.0 | 60 | 0.6397 | 0.3183 | 0.4981 | 0.9534 | nan | 0.0014 | 0.9948 | 0.0 | 0.0013 | 0.9534 |
| 0.583 | 35.0 | 70 | 0.5785 | 0.3181 | 0.4978 | 0.9537 | nan | 0.0006 | 0.9951 | 0.0 | 0.0005 | 0.9537 |
| 0.5324 | 40.0 | 80 | 0.5458 | 0.3182 | 0.4980 | 0.9545 | nan | 0.0002 | 0.9958 | 0.0 | 0.0002 | 0.9545 |
| 0.5155 | 45.0 | 90 | 0.5347 | 0.3186 | 0.4987 | 0.9558 | nan | 0.0001 | 0.9973 | 0.0 | 0.0001 | 0.9558 |
| 0.4874 | 50.0 | 100 | 0.4954 | 0.3179 | 0.4976 | 0.9537 | nan | 0.0 | 0.9951 | 0.0 | 0.0 | 0.9537 |
| 0.4716 | 55.0 | 110 | 0.4646 | 0.3185 | 0.4985 | 0.9555 | nan | 0.0 | 0.9969 | 0.0 | 0.0 | 0.9555 |
| 0.4441 | 60.0 | 120 | 0.4426 | 0.3185 | 0.4985 | 0.9555 | nan | 0.0 | 0.9970 | 0.0 | 0.0 | 0.9555 |
| 0.4659 | 65.0 | 130 | 0.4345 | 0.3189 | 0.4991 | 0.9567 | nan | 0.0 | 0.9982 | 0.0 | 0.0 | 0.9567 |
| 0.4758 | 70.0 | 140 | 0.4221 | 0.3181 | 0.4978 | 0.9543 | nan | 0.0 | 0.9957 | 0.0 | 0.0 | 0.9543 |
| 0.4208 | 75.0 | 150 | 0.4029 | 0.3190 | 0.4993 | 0.9571 | nan | 0.0 | 0.9987 | 0.0 | 0.0 | 0.9571 |
| 0.4395 | 80.0 | 160 | 0.4170 | 0.3207 | 0.5016 | 0.9559 | nan | 0.0062 | 0.9971 | 0.0 | 0.0062 | 0.9559 |
| 0.3981 | 85.0 | 170 | 0.3992 | 0.3214 | 0.5027 | 0.9574 | nan | 0.0067 | 0.9987 | 0.0 | 0.0066 | 0.9574 |
| 0.3983 | 90.0 | 180 | 0.3965 | 0.3282 | 0.5125 | 0.9560 | nan | 0.0288 | 0.9963 | 0.0 | 0.0285 | 0.9560 |
| 0.398 | 95.0 | 190 | 0.3747 | 0.3272 | 0.5112 | 0.9569 | nan | 0.0251 | 0.9973 | 0.0 | 0.0249 | 0.9568 |
| 0.3767 | 100.0 | 200 | 0.3722 | 0.3301 | 0.5155 | 0.9574 | nan | 0.0336 | 0.9975 | 0.0 | 0.0330 | 0.9573 |
| 0.3797 | 105.0 | 210 | 0.3781 | 0.3334 | 0.5204 | 0.9583 | nan | 0.0429 | 0.9980 | 0.0 | 0.0420 | 0.9582 |
| 0.373 | 110.0 | 220 | 0.3744 | 0.3409 | 0.5317 | 0.9593 | nan | 0.0654 | 0.9980 | 0.0 | 0.0636 | 0.9591 |
| 0.372 | 115.0 | 230 | 0.3700 | 0.3440 | 0.5364 | 0.9599 | nan | 0.0746 | 0.9983 | 0.0 | 0.0723 | 0.9598 |
| 0.3629 | 120.0 | 240 | 0.3688 | 0.3485 | 0.5433 | 0.9606 | nan | 0.0881 | 0.9984 | 0.0 | 0.0851 | 0.9604 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pgajo/mbert_EW-TT-PE_U1_S0_DROP1_mbert_E10_DEV98.0 | pgajo | 2024-02-12T13:00:15Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-12T12:59:25Z | ---
{}
---
Model description:
Model: bert-base-multilingual-cased
Dataset: TASTEset
Unshuffled ratio: ['1']
Shuffled ratio: ['0']
Best exact match epoch: 10
Best exact match: 97.79
Best epoch: 10
Drop duplicates: ['1']
Max epochs = 10
Optimizer lr = 3e-05
Optimizer eps = 1e-08
Batch size = 32
Dataset path = pgajo/EW-TT-PE_U1_S0_DROP1_mbert
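As a hedged sketch (not part of the original notes), the checkpoint can be loaded with the standard `transformers` question-answering pipeline; the question and context strings below are placeholders:
```python
from transformers import pipeline

# repo id taken from this card; the inputs are placeholder strings
qa = pipeline("question-answering", model="pgajo/mbert_EW-TT-PE_U1_S0_DROP1_mbert_E10_DEV98.0")
result = qa(
    question="Which ingredient is mentioned?",
    context="Add two cups of flour and mix well.",
)
print(result["answer"], result["score"])
```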
Results
| epoch | train_loss | train_f1 | train_exact | dev_loss | dev_f1 | dev_exact | test_loss | test_f1 | test_exact |
|--------:|-------------:|-----------:|--------------:|-----------:|---------:|------------:|------------:|----------:|-------------:|
| 1 | 2.94 | 17.7 | 10.02 | 0.76 | 69.4 | 58.56 | 0 | 0 | 0 |
| 2 | 0.38 | 86.34 | 80.72 | 0.15 | 95.66 | 91.99 | 0 | 0 | 0 |
| 3 | 0.08 | 97.33 | 95.44 | 0.08 | 98.38 | 95.86 | 0 | 0 | 0 |
| 4 | 0.04 | 98.98 | 98.27 | 0.09 | 98.09 | 96.41 | 0 | 0 | 0 |
| 5 | 0.03 | 98.94 | 98.41 | 0.08 | 98.44 | 96.41 | 0 | 0 | 0 |
| 6 | 0.02 | 99.32 | 98.76 | 0.08 | 98.57 | 97.24 | 0 | 0 | 0 |
| 7 | 0.02 | 99.44 | 99.24 | 0.05 | 98.44 | 97.51 | 0 | 0 | 0 |
| 8 | 0.01 | 99.82 | 99.59 | 0.07 | 98.47 | 97.24 | 0 | 0 | 0 |
| 9 | 0.01 | 99.8 | 99.65 | 0.07 | 98.66 | 97.24 | 0 | 0 | 0 |
| 10 | 0.01 | 99.82 | 99.65 | 0.06 | 98.59 | 97.79 | 0 | 0 | 0 | |
QMMMS/ppo-LunarLander-v2 | QMMMS | 2024-02-12T12:58:15Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T12:57:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.49 +/- 21.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; verify it against the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="QMMMS/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
noza-kit/Adapter_llama2_translate_Q_enpt_ex3-3epoch | noza-kit | 2024-02-12T12:56:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2024-02-12T12:30:57Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
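As a hedged reconstruction (not part of the original card), the config above maps onto the `transformers` `BitsAndBytesConfig` API roughly as follows; the base checkpoint name is an assumption inferred from the adapter's name:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# values mirror the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

# the base checkpoint is an assumption (the adapter name suggests a Llama-2 model)
base_id = "meta-llama/Llama-2-7b-hf"
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "noza-kit/Adapter_llama2_translate_Q_enpt_ex3-3epoch")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```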
### Framework versions
- PEFT 0.5.0
|
arun100/whisper-small-fa-2 | arun100 | 2024-02-12T12:51:30Z | 63 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"fa",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-11T06:14:51Z | ---
language:
- fa
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Small Persian Iranian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 fa
type: mozilla-foundation/common_voice_16_0
config: fa
split: test
args: fa
metrics:
- name: Wer
type: wer
value: 39.72011741415796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Persian Iranian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_16_0 fa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4858
- Wer: 39.7201
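As a hedged usage sketch (not part of the auto-generated card), the checkpoint can be run through the standard `transformers` speech-recognition pipeline; the chunk length and the audio path below are assumptions:
```python
from transformers import pipeline

# repo id taken from this card; the audio file is a placeholder
asr = pipeline(
    "automatic-speech-recognition",
    model="arun100/whisper-small-fa-2",
    chunk_length_s=30,
)
print(asr("sample_persian_audio.wav")["text"])
```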
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4531 | 1.03 | 500 | 0.6448 | 50.7393 |
| 0.4031 | 3.0 | 1000 | 0.5755 | 46.5001 |
| 0.2745 | 4.04 | 1500 | 0.5389 | 43.7190 |
| 0.336 | 6.0 | 2000 | 0.5166 | 42.4056 |
| 0.2429 | 7.04 | 2500 | 0.5045 | 41.1810 |
| 0.2852 | 9.01 | 3000 | 0.4941 | 40.6444 |
| 0.2217 | 10.04 | 3500 | 0.4888 | 40.1106 |
| 0.2384 | 12.01 | 4000 | 0.4873 | 39.9208 |
| 0.1889 | 13.04 | 4500 | 0.4858 | 39.7201 |
| 0.2202 | 15.01 | 5000 | 0.4888 | 39.7228 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
ambet/mistral_robot_lora | ambet | 2024-02-12T12:49:25Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-11T13:49:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HannoRE/q-Taxi-v3 | HannoRE | 2024-02-12T12:47:56Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T12:47:50Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.79
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is assumed to be the pickle-loading helper from the Deep RL course notebook
model = load_from_hub(repo_id="HannoRE/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
FarisAzny/bloom-1b1-lora-tagger | FarisAzny | 2024-02-12T12:45:50Z | 1 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2023-07-12T15:56:57Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
HannoRE/q-FrozenLake-v1-4x4-noSlippery | HannoRE | 2024-02-12T12:43:41Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-12T12:43:37Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="HannoRE/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Mlxa/atd-distilbert | Mlxa | 2024-02-12T12:37:58Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-11T19:58:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alitolga/deberta-v3-base-rank2 | alitolga | 2024-02-12T12:34:17Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"region:us"
] | null | 2024-02-12T12:31:13Z | ---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base-rank2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-rank2
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 13.8439 | 1.0 | 180 | 9.7690 |
| 8.174 | 2.0 | 360 | 5.4265 |
| 5.4657 | 3.0 | 540 | 4.5797 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
IsaacMwesigwa/footballer-recognition-gray-nobg | IsaacMwesigwa | 2024-02-12T12:26:19Z | 197 | 0 | transformers | [
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"dataset:footballer-recognition-gray-nobg/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-12T12:26:15Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- footballer-recognition-gray-nobg/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 6.152068138122559
f1_macro: 0.002214415118096559
f1_micro: 0.012527101903155865
f1_weighted: 0.0022165489799304268
precision_macro: 0.0015895320987927826
precision_micro: 0.012527101903155867
precision_weighted: 0.0015910638088373914
recall_macro: 0.012515042117930204
recall_micro: 0.012527101903155867
recall_weighted: 0.012527101903155867
accuracy: 0.012527101903155867
|
doroshroman/finetuned_sd_v1_5 | doroshroman | 2024-02-12T12:25:55Z | 30 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-12T09:57:16Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of guy raise money for army
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - doroshroman/finetuned_sd_v1_5
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of guy raise money for army" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
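Until the authors add their official snippet, a minimal hedged sketch with the standard `diffusers` API might look like this; the dtype, device, and sampling settings are assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "doroshroman/finetuned_sd_v1_5", torch_dtype=torch.float16
).to("cuda")

# the instance prompt this DreamBooth run was trained on
prompt = "a photo of guy raise money for army"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("example.png")
```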
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
WesselvanGils/MentaLLaMA-chat-7b-GGUF-q8 | WesselvanGils | 2024-02-12T12:23:03Z | 1 | 0 | null | [
"gguf",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-02-12T12:13:33Z | ---
license: mit
---
# MentaLLaMA-chat-7b-GGUF-q8
This model is a GGUF version of the MentaLLaMA model found [here](https://huggingface.co/klyang/MentaLLaMA-chat-7B)
The process for converting the model has been documented [here](https://www.substratus.ai/blog/converting-hf-model-gguf-model/)
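As a hedged sketch (not part of the original card), the q8 GGUF file could be loaded with `llama-cpp-python`; the exact filename and generation settings below are assumptions:
```python
from llama_cpp import Llama

# the GGUF filename is an assumption; check the repository's file list
llm = Llama(model_path="mentallama-chat-7b-q8_0.gguf", n_ctx=2048)
out = llm("How can I cope with feeling overwhelmed before exams?", max_tokens=128)
print(out["choices"][0]["text"])
```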
|
sam1120/dropoff-utcustom-train-SF-RGB-b0_5 | sam1120 | 2024-02-12T12:18:02Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T12:10:28Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGB-b0_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGB-b0_5
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2543
- Mean Iou: 0.6541
- Mean Accuracy: 0.6937
- Overall Accuracy: 0.9665
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.3944
- Accuracy Undropoff: 0.9930
- Iou Unlabeled: nan
- Iou Dropoff: 0.3424
- Iou Undropoff: 0.9659
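As a hedged inference sketch (not part of the auto-generated card), the checkpoint can be loaded with the standard `transformers` SegFormer classes; the input image and the upsampling step below are assumptions:
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo_id = "sam1120/dropoff-utcustom-train-SF-RGB-b0_5"
processor = SegformerImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# upsample to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]
```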
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.2123 | 3.33 | 10 | 1.1206 | 0.0793 | 0.1898 | 0.1888 | nan | 0.1908 | 0.1887 | 0.0 | 0.0494 | 0.1886 |
| 1.0927 | 6.67 | 20 | 1.0985 | 0.2196 | 0.5875 | 0.5351 | nan | 0.6450 | 0.5300 | 0.0 | 0.1290 | 0.5298 |
| 1.0578 | 10.0 | 30 | 0.9786 | 0.3662 | 0.7562 | 0.8622 | nan | 0.6400 | 0.8725 | 0.0 | 0.2367 | 0.8621 |
| 0.788 | 13.33 | 40 | 0.7940 | 0.4289 | 0.7505 | 0.9456 | nan | 0.5365 | 0.9646 | 0.0 | 0.3398 | 0.9468 |
| 0.6353 | 16.67 | 50 | 0.6206 | 0.4182 | 0.6840 | 0.9583 | nan | 0.3830 | 0.9850 | 0.0 | 0.2966 | 0.9581 |
| 0.6944 | 20.0 | 60 | 0.5213 | 0.4211 | 0.6766 | 0.9623 | nan | 0.3631 | 0.9901 | 0.0 | 0.3014 | 0.9620 |
| 0.5046 | 23.33 | 70 | 0.4765 | 0.4239 | 0.6796 | 0.9634 | nan | 0.3683 | 0.9910 | 0.0 | 0.3090 | 0.9628 |
| 0.4684 | 26.67 | 80 | 0.4643 | 0.3982 | 0.6347 | 0.9598 | nan | 0.2779 | 0.9914 | 0.0 | 0.2352 | 0.9593 |
| 0.4401 | 30.0 | 90 | 0.4483 | 0.4110 | 0.6507 | 0.9632 | nan | 0.3077 | 0.9936 | 0.0 | 0.2703 | 0.9627 |
| 0.4268 | 33.33 | 100 | 0.4366 | 0.6489 | 0.7001 | 0.9638 | nan | 0.4108 | 0.9895 | nan | 0.3347 | 0.9632 |
| 0.3939 | 36.67 | 110 | 0.4027 | 0.4272 | 0.6798 | 0.9650 | nan | 0.3670 | 0.9927 | 0.0 | 0.3171 | 0.9644 |
| 0.4472 | 40.0 | 120 | 0.4159 | 0.6428 | 0.6896 | 0.9638 | nan | 0.3887 | 0.9905 | nan | 0.3225 | 0.9632 |
| 0.3618 | 43.33 | 130 | 0.3765 | 0.6325 | 0.6671 | 0.9650 | nan | 0.3402 | 0.9939 | nan | 0.3006 | 0.9644 |
| 0.3456 | 46.67 | 140 | 0.3671 | 0.6395 | 0.6816 | 0.9643 | nan | 0.3715 | 0.9917 | nan | 0.3153 | 0.9637 |
| 0.3352 | 50.0 | 150 | 0.3572 | 0.6431 | 0.6839 | 0.9650 | nan | 0.3755 | 0.9923 | nan | 0.3218 | 0.9644 |
| 0.3143 | 53.33 | 160 | 0.3451 | 0.6351 | 0.6702 | 0.9651 | nan | 0.3467 | 0.9938 | nan | 0.3056 | 0.9646 |
| 0.3009 | 56.67 | 170 | 0.3357 | 0.6449 | 0.6941 | 0.9636 | nan | 0.3984 | 0.9898 | nan | 0.3267 | 0.9630 |
| 0.2765 | 60.0 | 180 | 0.3188 | 0.6458 | 0.6934 | 0.9641 | nan | 0.3965 | 0.9903 | nan | 0.3282 | 0.9634 |
| 0.2703 | 63.33 | 190 | 0.3179 | 0.6385 | 0.6732 | 0.9656 | nan | 0.3525 | 0.9940 | nan | 0.3119 | 0.9650 |
| 0.2746 | 66.67 | 200 | 0.3067 | 0.6385 | 0.6702 | 0.9662 | nan | 0.3456 | 0.9949 | nan | 0.3113 | 0.9656 |
| 0.2516 | 70.0 | 210 | 0.2992 | 0.6569 | 0.6968 | 0.9667 | nan | 0.4008 | 0.9929 | nan | 0.3477 | 0.9661 |
| 0.2503 | 73.33 | 220 | 0.2999 | 0.6671 | 0.7198 | 0.9659 | nan | 0.4497 | 0.9899 | nan | 0.3689 | 0.9652 |
| 0.2443 | 76.67 | 230 | 0.2816 | 0.6439 | 0.6750 | 0.9668 | nan | 0.3547 | 0.9952 | nan | 0.3215 | 0.9663 |
| 0.3757 | 80.0 | 240 | 0.2907 | 0.6593 | 0.7063 | 0.9659 | nan | 0.4215 | 0.9911 | nan | 0.3535 | 0.9652 |
| 0.2306 | 83.33 | 250 | 0.2767 | 0.6439 | 0.6807 | 0.9658 | nan | 0.3680 | 0.9935 | nan | 0.3226 | 0.9652 |
| 0.2216 | 86.67 | 260 | 0.2792 | 0.6583 | 0.7018 | 0.9663 | nan | 0.4115 | 0.9920 | nan | 0.3509 | 0.9657 |
| 0.3202 | 90.0 | 270 | 0.2681 | 0.6425 | 0.6789 | 0.9657 | nan | 0.3642 | 0.9936 | nan | 0.3199 | 0.9652 |
| 0.2174 | 93.33 | 280 | 0.2633 | 0.6467 | 0.6860 | 0.9657 | nan | 0.3791 | 0.9928 | nan | 0.3284 | 0.9651 |
| 0.2086 | 96.67 | 290 | 0.2658 | 0.6476 | 0.6900 | 0.9652 | nan | 0.3880 | 0.9920 | nan | 0.3306 | 0.9646 |
| 0.2042 | 100.0 | 300 | 0.2651 | 0.6486 | 0.6898 | 0.9655 | nan | 0.3873 | 0.9923 | nan | 0.3322 | 0.9649 |
| 0.2071 | 103.33 | 310 | 0.2597 | 0.6445 | 0.6792 | 0.9662 | nan | 0.3643 | 0.9941 | nan | 0.3233 | 0.9657 |
| 0.2097 | 106.67 | 320 | 0.2596 | 0.6615 | 0.7062 | 0.9665 | nan | 0.4206 | 0.9918 | nan | 0.3571 | 0.9658 |
| 0.3118 | 110.0 | 330 | 0.2557 | 0.6516 | 0.6928 | 0.9659 | nan | 0.3931 | 0.9924 | nan | 0.3380 | 0.9653 |
| 0.1956 | 113.33 | 340 | 0.2517 | 0.6494 | 0.6865 | 0.9664 | nan | 0.3794 | 0.9936 | nan | 0.3331 | 0.9658 |
| 0.201 | 116.67 | 350 | 0.2570 | 0.6573 | 0.7032 | 0.9658 | nan | 0.4151 | 0.9913 | nan | 0.3494 | 0.9651 |
| 0.1952 | 120.0 | 360 | 0.2543 | 0.6541 | 0.6937 | 0.9665 | nan | 0.3944 | 0.9930 | nan | 0.3424 | 0.9659 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sam1120/dropoff-utcustom-train-SF-RGB-b0_7 | sam1120 | 2024-02-12T12:18:01Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-12T12:10:30Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGB-b0_7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGB-b0_7
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1457
- Mean Iou: 0.6795
- Mean Accuracy: 0.7207
- Overall Accuracy: 0.9691
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.4481
- Accuracy Undropoff: 0.9932
- Iou Unlabeled: nan
- Iou Dropoff: 0.3907
- Iou Undropoff: 0.9684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 1.1505 | 3.33 | 10 | 1.1103 | 0.1106 | 0.6036 | 0.2919 | nan | 0.9456 | 0.2616 | 0.0 | 0.0703 | 0.2616 |
| 0.9635 | 6.67 | 20 | 1.0114 | 0.3710 | 0.8470 | 0.8737 | nan | 0.8177 | 0.8763 | 0.0 | 0.2435 | 0.8694 |
| 0.9358 | 10.0 | 30 | 0.8242 | 0.4206 | 0.7727 | 0.9440 | nan | 0.5848 | 0.9606 | 0.0 | 0.3194 | 0.9425 |
| 0.579 | 13.33 | 40 | 0.5703 | 0.4525 | 0.7615 | 0.9633 | nan | 0.5402 | 0.9829 | 0.0 | 0.3951 | 0.9624 |
| 0.4411 | 16.67 | 50 | 0.4166 | 0.4529 | 0.7380 | 0.9667 | nan | 0.4872 | 0.9889 | 0.0 | 0.3928 | 0.9659 |
| 0.4311 | 20.0 | 60 | 0.3843 | 0.6678 | 0.7156 | 0.9667 | nan | 0.4400 | 0.9911 | nan | 0.3695 | 0.9661 |
| 0.3437 | 23.33 | 70 | 0.3590 | 0.4347 | 0.6956 | 0.9655 | nan | 0.3995 | 0.9918 | 0.0 | 0.3392 | 0.9649 |
| 0.3136 | 26.67 | 80 | 0.3198 | 0.6259 | 0.6622 | 0.9638 | nan | 0.3312 | 0.9931 | nan | 0.2885 | 0.9633 |
| 0.2682 | 30.0 | 90 | 0.2919 | 0.6187 | 0.6470 | 0.9648 | nan | 0.2984 | 0.9957 | nan | 0.2730 | 0.9643 |
| 0.2521 | 33.33 | 100 | 0.2957 | 0.6448 | 0.6845 | 0.9653 | nan | 0.3764 | 0.9926 | nan | 0.3248 | 0.9648 |
| 0.2287 | 36.67 | 110 | 0.2747 | 0.6800 | 0.7256 | 0.9685 | nan | 0.4591 | 0.9921 | nan | 0.3922 | 0.9678 |
| 0.2203 | 40.0 | 120 | 0.2537 | 0.7108 | 0.7687 | 0.9706 | nan | 0.5472 | 0.9902 | nan | 0.4517 | 0.9699 |
| 0.1964 | 43.33 | 130 | 0.2356 | 0.6689 | 0.7054 | 0.9686 | nan | 0.4167 | 0.9941 | nan | 0.3699 | 0.9680 |
| 0.1776 | 46.67 | 140 | 0.2205 | 0.6729 | 0.7137 | 0.9684 | nan | 0.4343 | 0.9931 | nan | 0.3780 | 0.9677 |
| 0.1675 | 50.0 | 150 | 0.2061 | 0.6809 | 0.7244 | 0.9689 | nan | 0.4562 | 0.9926 | nan | 0.3936 | 0.9682 |
| 0.148 | 53.33 | 160 | 0.1954 | 0.6924 | 0.7418 | 0.9694 | nan | 0.4920 | 0.9915 | nan | 0.4160 | 0.9687 |
| 0.1364 | 56.67 | 170 | 0.1915 | 0.6869 | 0.7415 | 0.9681 | nan | 0.4928 | 0.9902 | nan | 0.4064 | 0.9674 |
| 0.1171 | 60.0 | 180 | 0.1776 | 0.7206 | 0.7816 | 0.9714 | nan | 0.5734 | 0.9899 | nan | 0.4706 | 0.9707 |
| 0.1169 | 63.33 | 190 | 0.1754 | 0.6580 | 0.6853 | 0.9689 | nan | 0.3741 | 0.9965 | nan | 0.3476 | 0.9684 |
| 0.1178 | 66.67 | 200 | 0.1676 | 0.6783 | 0.7233 | 0.9684 | nan | 0.4545 | 0.9922 | nan | 0.3888 | 0.9677 |
| 0.1016 | 70.0 | 210 | 0.1670 | 0.6633 | 0.6985 | 0.9682 | nan | 0.4025 | 0.9944 | nan | 0.3590 | 0.9676 |
| 0.1025 | 73.33 | 220 | 0.1648 | 0.6789 | 0.7154 | 0.9696 | nan | 0.4366 | 0.9943 | nan | 0.3888 | 0.9690 |
| 0.0956 | 76.67 | 230 | 0.1607 | 0.6684 | 0.7103 | 0.9677 | nan | 0.4279 | 0.9927 | nan | 0.3697 | 0.9671 |
| 0.1443 | 80.0 | 240 | 0.1611 | 0.6747 | 0.7134 | 0.9688 | nan | 0.4332 | 0.9937 | nan | 0.3811 | 0.9682 |
| 0.0902 | 83.33 | 250 | 0.1600 | 0.6713 | 0.7060 | 0.9691 | nan | 0.4174 | 0.9946 | nan | 0.3740 | 0.9685 |
| 0.0846 | 86.67 | 260 | 0.1559 | 0.6772 | 0.7263 | 0.9677 | nan | 0.4613 | 0.9912 | nan | 0.3874 | 0.9670 |
| 0.1166 | 90.0 | 270 | 0.1587 | 0.6615 | 0.6984 | 0.9677 | nan | 0.4030 | 0.9939 | nan | 0.3559 | 0.9671 |
| 0.0825 | 93.33 | 280 | 0.1538 | 0.6684 | 0.7068 | 0.9682 | nan | 0.4199 | 0.9936 | nan | 0.3692 | 0.9676 |
| 0.0769 | 96.67 | 290 | 0.1527 | 0.6649 | 0.7033 | 0.9679 | nan | 0.4130 | 0.9936 | nan | 0.3626 | 0.9673 |
| 0.0722 | 100.0 | 300 | 0.1473 | 0.6832 | 0.7247 | 0.9694 | nan | 0.4563 | 0.9932 | nan | 0.3976 | 0.9688 |
| 0.0779 | 103.33 | 310 | 0.1465 | 0.6809 | 0.7200 | 0.9695 | nan | 0.4462 | 0.9937 | nan | 0.3930 | 0.9689 |
| 0.0771 | 106.67 | 320 | 0.1494 | 0.6673 | 0.7052 | 0.9682 | nan | 0.4167 | 0.9937 | nan | 0.3670 | 0.9676 |
| 0.1082 | 110.0 | 330 | 0.1479 | 0.6753 | 0.7182 | 0.9683 | nan | 0.4438 | 0.9926 | nan | 0.3830 | 0.9677 |
| 0.0726 | 113.33 | 340 | 0.1451 | 0.6765 | 0.7159 | 0.9689 | nan | 0.4384 | 0.9935 | nan | 0.3846 | 0.9683 |
| 0.0743 | 116.67 | 350 | 0.1469 | 0.6814 | 0.7249 | 0.9689 | nan | 0.4571 | 0.9927 | nan | 0.3946 | 0.9683 |
| 0.0703 | 120.0 | 360 | 0.1457 | 0.6795 | 0.7207 | 0.9691 | nan | 0.4481 | 0.9932 | nan | 0.3907 | 0.9684 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
paulml/DPOB-NMTOB-7B | paulml | 2024-02-12T12:00:10Z | 56 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"eren23/dpo-binarized-NeutrixOmnibe-7B",
"paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:merge:eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"base_model:merge:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-12T11:56:12Z | ---
tags:
- merge
- mergekit
- lazymergekit
- eren23/dpo-binarized-NeutrixOmnibe-7B
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
base_model:
- eren23/dpo-binarized-NeutrixOmnibe-7B
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
license: cc-by-nc-4.0
---
# DPOB-NMTOB-7B
DPOB-NMTOB-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
* [paulml/OmniBeagleSquaredMBX-v3-7B-v2](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: eren23/dpo-binarized-NeutrixOmnibe-7B
layer_range: [0, 32]
- model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
layer_range: [0, 32]
merge_method: slerp
base_model: eren23/dpo-binarized-NeutrixOmnibe-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "paulml/DPOB-NMTOB-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |