modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-06-28 06:27:35
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 500
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 54
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-06-28 06:24:42
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
emmakz/GPT_text_generation | emmakz | 2024-06-21T15:21:55Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T14:37:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
basakdemirok/bert-base-multilingual-cased-news_cat_v02 | basakdemirok | 2024-06-21T15:16:07Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-21T15:04:37Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: basakdemirok/bert-base-multilingual-cased-news_cat_v02
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# basakdemirok/bert-base-multilingual-cased-news_cat_v02
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0405
- Validation Loss: 0.2520
- Train F1: 0.9332
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1492, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.6533 | 0.3653 | 0.8861 | 0 |
| 0.1946 | 0.3161 | 0.9194 | 1 |
| 0.0891 | 0.2236 | 0.9332 | 2 |
| 0.0405 | 0.2520 | 0.9332 | 3 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.13.1
- Datasets 2.4.0
- Tokenizers 0.19.1
|
hchcsuim/batch-size16_Celeb-DF-v2_opencv-1FPS_faces-expand0-aligned_unaugmentation | hchcsuim | 2024-06-21T15:12:20Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-06-19T18:11:20Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_Celeb-DF-v2_opencv-1FPS_faces-expand0-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9943221092129949
- name: Precision
type: precision
value: 0.9971713969472951
- name: Recall
type: recall
value: 0.996544990235842
- name: F1
type: f1
value: 0.9968580951860554
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_Celeb-DF-v2_opencv-1FPS_faces-expand0-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0169
- Accuracy: 0.9943
- Precision: 0.9972
- Recall: 0.9965
- F1: 0.9969
- Roc Auc: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.0403 | 1.0 | 1266 | 0.0169 | 0.9943 | 0.9972 | 0.9965 | 0.9969 | 0.9993 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
John6666/ebara-pony-v21-sdxl-spo-lora | John6666 | 2024-06-21T15:05:17Z | 468 | 4 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"lora",
"anime",
"pony",
"SPO",
"base_model:John6666/ebara-pony-v21-sdxl",
"base_model:adapter:John6666/ebara-pony-v21-sdxl",
"region:us"
] | text-to-image | 2024-06-21T13:51:58Z | ---
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- lora
- diffusers
- anime
- pony
- SPO
base_model: John6666/ebara-pony-v21-sdxl
---
This repository is for testing [SPO-SDXL LoRA](https://huggingface.co/SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep_LoRA). Simply applying it with a "weight=1.0 (default)" will help produce a high-definition image.
It seems to be slightly prone to disorder in Pony-type models, but this can be mostly avoided by setting "clip skip=2" in your environment.
|
nagrajn/TinyLinuxDSLM67M | nagrajn | 2024-06-21T15:03:44Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T15:03:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/kivotos-xl-v20-spo-lora | John6666 | 2024-06-21T15:03:17Z | 500 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"lora",
"anime",
"SPO",
"base_model:yodayo-ai/kivotos-xl-2.0",
"base_model:adapter:yodayo-ai/kivotos-xl-2.0",
"region:us"
] | text-to-image | 2024-06-21T11:45:45Z | ---
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- lora
- diffusers
- anime
- SPO
base_model: yodayo-ai/kivotos-xl-2.0
---
This repository is for testing [SPO-SDXL LoRA](https://huggingface.co/SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep_LoRA). Simply applying it with a "weight=1.0 (default)" will help produce a high-definition image.
It seems to be slightly prone to disorder in Pony-type models, but this can be mostly avoided by setting "clip skip=2" in your environment.
|
peruginia/Llama-2-Small | peruginia | 2024-06-21T15:01:19Z | 80 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"italian",
"italiano",
"it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-22T16:45:56Z | ---
language:
- it
pipeline_tag: text-generation
max_length: 100
widget:
- text: Alessandro è un ragazzo che progetta Infissi
- text: Melissa è una ragazza che adora
tags:
- italian
- italiano
- llama
---
This is a train starting from an empty model based exclusively on Italian language datasets (currently redpajama 2023-14 it)<br/>
<br/>
the train is ongoing and will extend to new datasets.<br/>
<br/>
More precise versions will be published shortly.<br/>
<br/>
Train on my server, i have studied and adapted the model starting from the repository https://github.com/karpathy/llama2.c<br/>
<br/>
- LLama model parameter:
- max_seq_len: (7b = 2048) The maximum sequence length for input data.
- dim (7b= 4096) Represents the dimensionalityl
- n_layers: (7b = 32) The number of layers
- n_heads: (7b = 32) Determines the number of attention heads
- n_kv_heads: (7b = 32) The number of key and value heads
- multiple_of: (7b = 256) A value used to make the SwiGLU hidden layer size a multiple of a large power of 2
<br/>
- Model parameter
- max_seq_len = 1024
- dim = 768
- n_layers = 32
- n_heads = 32
- n_kv_heads = 32
- multiple_of = 32
<br/>
num decayed parameter tensors: 225, with 251,068,416 parameters<br/>
num non-decayed parameter tensors: 65, with 49,920 parameters<br/>
To just use the model, you can run:
```py
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer_model = AutoTokenizer.from_pretrained("peruginia/Llama-2-Small")
model = AutoModelForCausalLM.from_pretrained("peruginia/Llama-2-Small")
model.to('cuda')
from tokenizer import Tokenizer
# Define the prompt
prompt = "Alessandro è un ragazzo che progetta Infissi"
# Tokenize the prompt
inputs = tokenizer_model(prompt, return_tensors="pt").to('cuda')
# Generate text
output = model.generate(**inputs, do_sample = True, max_new_tokens=100, top_k = 300, top_p = 0.85, temperature = 1.0, num_return_sequences = 1)
# Decode and print the generated text
generated_text = tokenizer_model.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
|
sayanbanerjee32/bengali_tokenizer | sayanbanerjee32 | 2024-06-21T15:00:47Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-06-18T18:11:52Z | ---
license: mit
---
# Bengali BPE Tokenizer
## Dataset
Multiple references of raw Bengali corpus are available at this [GitHub link](https://github.com/sagorbrur/bangla-corpus). Used following references from that for gathering raw bengali text for the purpose of training the tokenizer.
- [Tab-delimited Bilingual Sentence Pairs](https://www.manythings.org/anki/) - These are selected sentence pairs from the [Tatoeba Project](http://tatoeba.org/home). This has approximately 6,500 english to bengali sentence pairs. Only Bengali sentences are extracted for training the tokenization
- [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) - Only the input data from validation dataset of [Bengali paraphrases](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/blob/main/data/bn_IndicParaphrase_v1.0.zip) are used for the tokenization. That dataset contains 10,000 Bengali sentences.
## Steps
1. Followed the instructions from the [video](https://youtu.be/zduSFxRajkE) from Andrej Karpathy and created the [notebook](https://github.com/sayanbanerjee32/TASI_ERAv2_S20/blob/main/bengali_bpe_tokenizer_experiment.ipynb) for experiment.
2. Experimented with the regular expression that suits bengali language. The intention was to the regular expression for splitting Bengali words instead of characters.
- Using gpt2 regex `'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+` was resulting in splitting of individual characters instead of words.
- Used the regex ` ?\p{Bengali}+| ?[^\s\p{Bengali}]+|\s+(?!\S)|\s+` that could split the sentence _"সবাই যা করতে চায় তা করতে চায়নি।"_ into following words _'সবাই', ' যা', ' করতে', ' চায়', ' তা', ' করতে', ' চায়নি', '।'_
3. Updated BPE training process to use text chucks as output from the regular expression splits instead of the complete sentences. This helps avoid merging of tokens across different worlds. [Ref](https://github.com/karpathy/minbpe/blob/master/minbpe/regex.py)
4. Updated `encode` and `decoder` function to deal with text chucks instead of complete sentences. [Ref](https://github.com/karpathy/minbpe/blob/master/minbpe/regex.py)
### Tokenizer training
1. Experimented with the regular expression that suits bengali language. The intention was to the regular expression for splitting Bengali words instead of characters.
- Using gpt2 regex `'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+` was resulting in splitting of individual characters instead of words.
- Used the regex ` ?\p{Bengali}+| ?[^\s\p{Bengali}]+|\s+(?!\S)|\s+` that could split the sentence _"সবাই যা করতে চায় তা করতে চায়নি।"_ into following words _'সবাই', ' যা', ' করতে', ' চায়', ' তা', ' করতে', ' চায়নি', '।'_
2. Updated BPE training process to use text chucks as output from the regular expression splits instead of the complete sentences. This helps avoid merging of tokens across different worlds. [Ref](https://github.com/karpathy/minbpe/blob/master/minbpe/regex.py)
3. Updated `encode` and `decoder` function to deal with text chucks instead of complete sentences. [Ref](https://github.com/karpathy/minbpe/blob/master/minbpe/regex.py)
4. Trained the tokenizer to reach **vocab size of 5001 and compression of 11X**
5. Saved vocab file (contains the mapping from tokens to bengali text), merges files (contains the mapping from pair of tokens to be merged to token after merging) and the regular expression that is used for splitting the bengali sentences. All these artifacts are required to perform BPE tokenization on a new text.
## The HuggingFace Spaces Gradio App
The app is available [here](https://huggingface.co/spaces/sayanbanerjee32/bengali_bpe_tokenizer)
The App takes one or more Bengali sentences as input provide following outputs
1. Numeric tokens that represent the sentence (using encode function)
2. Regenerated sentence using the tokens (using decode function)
3. A visualization for each token to Bengali text mapping as explanation for the tokenization.
|
damgomz/ft_1_14e6_base_x1 | damgomz | 2024-06-21T14:48:44Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:02:20Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_1_14e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.716583 | 0.500197 |
| 1 | 0.426017 | 0.333397 | 0.911781 |
| 2 | 0.315625 | 0.259415 | 0.928608 |
| 3 | 0.221709 | 0.239553 | 0.915761 |
| 4 | 0.180145 | 0.257902 | 0.918258 |
| 5 | 0.134075 | 0.252331 | 0.915336 |
| 6 | 0.097267 | 0.272010 | 0.919136 |
|
damgomz/ft_1_10e6_base_x1 | damgomz | 2024-06-21T14:48:40Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:53:14Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_1_10e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-05 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.693479 | 0.525597 |
| 1 | 0.324673 | 0.236507 | 0.924735 |
| 2 | 0.221799 | 0.270642 | 0.874250 |
| 3 | 0.163480 | 0.227963 | 0.938175 |
| 4 | 0.113000 | 0.235100 | 0.933154 |
| 5 | 0.068230 | 0.278275 | 0.923741 |
| 6 | 0.034612 | 0.325356 | 0.921737 |
|
damgomz/ft_8_19e6_base_x12 | damgomz | 2024-06-21T14:48:33Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:58:25Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_19e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.9e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.715711 | 0.223253 |
| 1 | 0.330865 | 0.258634 | 0.896287 |
| 2 | 0.223282 | 0.244515 | 0.899956 |
| 3 | 0.182193 | 0.272237 | 0.920689 |
| 4 | 0.146640 | 0.256006 | 0.917274 |
| 5 | 0.110072 | 0.272076 | 0.914731 |
| 6 | 0.084004 | 0.354833 | 0.898870 |
|
damgomz/ft_1_10e6_base_x2 | damgomz | 2024-06-21T14:48:32Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:57:05Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_1_10e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-05 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.758807 | 0.673346 |
| 1 | 0.303675 | 0.239336 | 0.909007 |
| 2 | 0.205036 | 0.229961 | 0.914064 |
| 3 | 0.149762 | 0.225234 | 0.932055 |
| 4 | 0.092522 | 0.286175 | 0.932102 |
| 5 | 0.046867 | 0.333840 | 0.920513 |
| 6 | 0.018871 | 0.421377 | 0.918092 |
|
damgomz/ft_4_10e6_base_x4 | damgomz | 2024-06-21T14:48:18Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:00:34Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_4_10e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-05 |
| batch_size | 4 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.761562 | 0.659722 |
| 1 | 0.292765 | 0.221953 | 0.922686 |
| 2 | 0.194319 | 0.228472 | 0.926518 |
| 3 | 0.148464 | 0.241046 | 0.930488 |
| 4 | 0.099623 | 0.271967 | 0.913407 |
| 5 | 0.067218 | 0.324850 | 0.924071 |
| 6 | 0.045785 | 0.375282 | 0.907802 |
|
damgomz/ft_8_12e6_base_x2 | damgomz | 2024-06-21T14:48:17Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:04:09Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_12e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.2e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.723395 | 0.464773 |
| 1 | 0.287562 | 0.239798 | 0.913108 |
| 2 | 0.186648 | 0.238462 | 0.931404 |
| 3 | 0.132085 | 0.277007 | 0.889860 |
| 4 | 0.082917 | 0.277404 | 0.931897 |
| 5 | 0.054986 | 0.336345 | 0.909360 |
| 6 | 0.036849 | 0.353699 | 0.916457 |
|
damgomz/ft_8_14e6_base_x8 | damgomz | 2024-06-21T14:48:15Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:58:34Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_14e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.773143 | 0.185776 |
| 1 | 0.299646 | 0.250503 | 0.908211 |
| 2 | 0.202304 | 0.228516 | 0.926588 |
| 3 | 0.152242 | 0.253524 | 0.900976 |
| 4 | 0.113440 | 0.288011 | 0.926897 |
| 5 | 0.074958 | 0.324120 | 0.914874 |
| 6 | 0.053725 | 0.342149 | 0.904773 |
|
damgomz/ft_8_12e6_base_x1 | damgomz | 2024-06-21T14:48:08Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:03:40Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_12e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.2e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.762920 | 0.639119 |
| 1 | 0.297602 | 0.219547 | 0.898321 |
| 2 | 0.190407 | 0.212052 | 0.917591 |
| 3 | 0.144319 | 0.210398 | 0.921219 |
| 4 | 0.111687 | 0.252235 | 0.913619 |
| 5 | 0.078434 | 0.253483 | 0.929714 |
| 6 | 0.052000 | 0.310400 | 0.899860 |
|
damgomz/ft_8_12e6_base_x12 | damgomz | 2024-06-21T14:48:06Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:56:28Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_12e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.2e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.747532 | 0.336758 |
| 1 | 0.329673 | 0.241658 | 0.911435 |
| 2 | 0.219469 | 0.240997 | 0.915746 |
| 3 | 0.179079 | 0.219012 | 0.922879 |
| 4 | 0.143406 | 0.256917 | 0.924351 |
| 5 | 0.111881 | 0.263345 | 0.908362 |
| 6 | 0.080572 | 0.333107 | 0.903060 |
|
damgomz/ft_8_11e6_base_x4 | damgomz | 2024-06-21T14:48:05Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:03:22Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_11e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.1e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.772248 | 0.554160 |
| 1 | 0.298749 | 0.233680 | 0.906756 |
| 2 | 0.190520 | 0.218291 | 0.919032 |
| 3 | 0.139905 | 0.254326 | 0.911552 |
| 4 | 0.092589 | 0.309714 | 0.912312 |
| 5 | 0.059291 | 0.319556 | 0.920052 |
| 6 | 0.038939 | 0.350370 | 0.896027 |
|
damgomz/ft_4_15e6_base_x8 | damgomz | 2024-06-21T14:48:04Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:01:40Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_4_15e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.5e-05 |
| batch_size | 4 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.699569 | 0.628879 |
| 1 | 0.304339 | 0.235223 | 0.925052 |
| 2 | 0.210827 | 0.253368 | 0.912372 |
| 3 | 0.163374 | 0.226955 | 0.924966 |
| 4 | 0.121055 | 0.296473 | 0.919310 |
| 5 | 0.092410 | 0.319336 | 0.899213 |
| 6 | 0.066765 | 0.347538 | 0.911288 |
|
damgomz/ft_8_16e6_base_x4 | damgomz | 2024-06-21T14:48:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:57:04Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_16e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.6e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.759085 | 0.354395 |
| 1 | 0.300971 | 0.228009 | 0.927768 |
| 2 | 0.199063 | 0.241348 | 0.936338 |
| 3 | 0.147125 | 0.273964 | 0.923665 |
| 4 | 0.103946 | 0.288027 | 0.917346 |
| 5 | 0.074450 | 0.303846 | 0.911760 |
| 6 | 0.051962 | 0.379501 | 0.921824 |
|
damgomz/ft_4_10e6_base_x2 | damgomz | 2024-06-21T14:47:58Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:58:13Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_4_10e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-05 |
| batch_size | 4 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.751640 | 0.509594 |
| 1 | 0.285416 | 0.221035 | 0.900172 |
| 2 | 0.186811 | 0.229986 | 0.915329 |
| 3 | 0.131754 | 0.250083 | 0.925393 |
| 4 | 0.082792 | 0.292843 | 0.911546 |
| 5 | 0.050826 | 0.350124 | 0.915397 |
| 6 | 0.037558 | 0.361182 | 0.939171 |
|
damgomz/ft_8_14e6_base_x12 | damgomz | 2024-06-21T14:47:55Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:59:37Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_14e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.722228 | 0.493265 |
| 1 | 0.328708 | 0.249943 | 0.929806 |
| 2 | 0.218188 | 0.229234 | 0.905373 |
| 3 | 0.178946 | 0.220918 | 0.923382 |
| 4 | 0.139944 | 0.245058 | 0.916765 |
| 5 | 0.108037 | 0.285771 | 0.909685 |
| 6 | 0.078401 | 0.321646 | 0.903681 |
|
damgomz/ft_8_15e6_base_x2 | damgomz | 2024-06-21T14:47:54Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:03:24Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_15e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.5e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.729386 | 0.323580 |
| 1 | 0.286663 | 0.216003 | 0.925160 |
| 2 | 0.181589 | 0.217169 | 0.921080 |
| 3 | 0.127750 | 0.261901 | 0.910692 |
| 4 | 0.085803 | 0.280536 | 0.924114 |
| 5 | 0.050979 | 0.305894 | 0.910637 |
| 6 | 0.036369 | 0.354458 | 0.937653 |
|
damgomz/ft_8_13e6_base_x8 | damgomz | 2024-06-21T14:47:46Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-20T15:50:54Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_13e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.691509 | 0.294718 |
| 1 | 0.304281 | 0.238582 | 0.910816 |
| 2 | 0.203415 | 0.231440 | 0.909372 |
| 3 | 0.158955 | 0.242655 | 0.918669 |
| 4 | 0.112334 | 0.296292 | 0.922092 |
| 5 | 0.076614 | 0.319123 | 0.913993 |
| 6 | 0.053686 | 0.330590 | 0.912833 |
|
damgomz/ft_8_10e6_base_x1 | damgomz | 2024-06-21T14:47:42Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:02:50Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_10e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.708585 | 0.416996 |
| 1 | 0.295525 | 0.207538 | 0.923839 |
| 2 | 0.186544 | 0.208089 | 0.935072 |
| 3 | 0.134150 | 0.229358 | 0.917265 |
| 4 | 0.090733 | 0.252686 | 0.919996 |
| 5 | 0.059339 | 0.290969 | 0.915235 |
| 6 | 0.036099 | 0.336024 | 0.919988 |
|
damgomz/ft_8_14e6_base_x1 | damgomz | 2024-06-21T14:47:33Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:51:51Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_14e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.788339 | 0.670834 |
| 1 | 0.309520 | 0.221475 | 0.927842 |
| 2 | 0.198204 | 0.226544 | 0.930295 |
| 3 | 0.160122 | 0.219713 | 0.939555 |
| 4 | 0.132114 | 0.279475 | 0.924528 |
| 5 | 0.112628 | 0.274247 | 0.913619 |
| 6 | 0.075357 | 0.261967 | 0.929987 |
|
damgomz/ft_8_10e6_base_x2 | damgomz | 2024-06-21T14:47:29Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:03:07Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_10e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.716914 | 0.467868 |
| 1 | 0.289505 | 0.219820 | 0.920435 |
| 2 | 0.178071 | 0.226766 | 0.904042 |
| 3 | 0.122657 | 0.248341 | 0.921075 |
| 4 | 0.075919 | 0.262197 | 0.921148 |
| 5 | 0.050032 | 0.328335 | 0.928469 |
| 6 | 0.032296 | 0.382334 | 0.933936 |
|
damgomz/ft_8_13e6_base_x2 | damgomz | 2024-06-21T14:47:14Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:02:50Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_13e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.785084 | 0.331947 |
| 1 | 0.284889 | 0.240138 | 0.908974 |
| 2 | 0.185287 | 0.221502 | 0.928397 |
| 3 | 0.129239 | 0.231947 | 0.909701 |
| 4 | 0.080333 | 0.292352 | 0.924033 |
| 5 | 0.045982 | 0.328297 | 0.928654 |
| 6 | 0.037873 | 0.349078 | 0.917282 |
|
damgomz/ft_8_17e6_base_x1 | damgomz | 2024-06-21T14:47:03Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:03:47Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_17e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.7e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.704053 | 0.601444 |
| 1 | 0.298288 | 0.236453 | 0.910655 |
| 2 | 0.205723 | 0.209517 | 0.920183 |
| 3 | 0.159341 | 0.248067 | 0.901406 |
| 4 | 0.147552 | 0.315137 | 0.892548 |
| 5 | 0.106366 | 0.284153 | 0.929177 |
| 6 | 0.075105 | 0.295876 | 0.918940 |
|
damgomz/ft_8_14e6_base_x4 | damgomz | 2024-06-21T14:47:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T15:57:24Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_14e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.727324 | 0.427053 |
| 1 | 0.294848 | 0.236384 | 0.936110 |
| 2 | 0.194801 | 0.234649 | 0.908781 |
| 3 | 0.140616 | 0.270551 | 0.927666 |
| 4 | 0.095706 | 0.278984 | 0.921705 |
| 5 | 0.066307 | 0.290005 | 0.918060 |
| 6 | 0.042936 | 0.377610 | 0.921159 |
|
damgomz/ft_8_19e6_base_x1 | damgomz | 2024-06-21T14:46:50Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:03:35Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_19e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.9e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.724836 | 0.423792 |
| 1 | 0.309712 | 0.228687 | 0.927404 |
| 2 | 0.205818 | 0.234855 | 0.914747 |
| 3 | 0.165742 | 0.232025 | 0.917669 |
| 4 | 0.126841 | 0.233678 | 0.917117 |
| 5 | 0.098474 | 0.320131 | 0.930652 |
| 6 | 0.078062 | 0.264216 | 0.920674 |
|
damgomz/ft_1_15e6_base_x4 | damgomz | 2024-06-21T14:46:40Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:14:56Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_1_15e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.5e-05 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.697463 | 0.537197 |
| 1 | 0.419611 | 0.351846 | 0.724784 |
| 2 | 0.346324 | 0.339727 | 0.719493 |
| 3 | 0.297063 | 0.334531 | 0.746933 |
| 4 | 0.214687 | 0.254192 | 0.920994 |
| 5 | 0.135253 | 0.260620 | 0.913912 |
| 6 | 0.096885 | 0.292463 | 0.910672 |
|
damgomz/ft_8_19e6_base_x2 | damgomz | 2024-06-21T14:46:39Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:03:54Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_8_19e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.9e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.707489 | 0.748562 |
| 1 | 0.297821 | 0.252767 | 0.887250 |
| 2 | 0.193647 | 0.242212 | 0.932282 |
| 3 | 0.140631 | 0.248781 | 0.931629 |
| 4 | 0.104267 | 0.272819 | 0.916245 |
| 5 | 0.069640 | 0.329189 | 0.925932 |
| 6 | 0.052564 | 0.301648 | 0.909506 |
|
damgomz/ft_2_13e6_base_x4 | damgomz | 2024-06-21T14:46:37Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:03:55Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_2_13e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 2 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.765464 | 0.500562 |
| 1 | 0.316306 | 0.246949 | 0.930889 |
| 2 | 0.224885 | 0.232140 | 0.928463 |
| 3 | 0.172514 | 0.232513 | 0.917546 |
| 4 | 0.128943 | 0.251931 | 0.925488 |
| 5 | 0.083987 | 0.312494 | 0.916673 |
| 6 | 0.059381 | 0.342837 | 0.916191 |
|
damgomz/ft_2_14e6_base_x4 | damgomz | 2024-06-21T14:46:29Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T16:15:37Z | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
20 Juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | recovering |
| model_name | ft_2_14e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 2 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 1 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.697627 | 0.382422 |
| 1 | 0.326044 | 0.261927 | 0.905944 |
| 2 | 0.228978 | 0.248462 | 0.923464 |
| 3 | 0.179889 | 0.240959 | 0.920381 |
| 4 | 0.133852 | 0.259695 | 0.936355 |
| 5 | 0.101323 | 0.298263 | 0.916265 |
| 6 | 0.075174 | 0.339868 | 0.921268 |
|
jhoppanne/Dogs-Breed-Image-Classification-V1 | jhoppanne | 2024-06-21T14:35:39Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/resnet-101",
"base_model:finetune:microsoft/resnet-101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-06-20T16:00:30Z | ---
license: apache-2.0
base_model: microsoft/resnet-101
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Dogs-Breed-Image-Classification-V1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8757971454600668
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dogs-Breed-Image-Classification-V1
This model is a fine-tuned version of [microsoft/resnet-101](https://huggingface.co/microsoft/resnet-101) on the [Standford dogs dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-dogs-dataset.).
It achieves the following results on the evaluation set:
- Loss: 0.4469
- Accuracy: 0.8758
## Model description
[Link to the fine-tuned model using resnet-50](https://huggingface.co/jhoppanne/Dogs-Breed-Image-Classification-V0)
This model was trained using dataset from [Kaggle - Standford dogs dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-dogs-dataset)
Quotes from the website:
The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. This dataset has been built using images and annotation from ImageNet for the task of fine-grained image categorization. It was originally collected for fine-grain image categorization, a challenging problem as certain dog breeds have near identical features or differ in colour and age.
citation:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [pdf] [poster] [BibTex]
Secondary:
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009. [pdf] [BibTex]
## Intended uses & limitations
This model is fined tune solely for classifiying 120 species of dogs.
## Training and evaluation data
75% training data, 25% testing data.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 309 | 18.7685 | 0.0091 |
| 18.7211 | 2.0 | 618 | 18.5975 | 0.0091 |
| 18.7211 | 3.0 | 927 | 17.4087 | 0.0091 |
| 15.4274 | 4.0 | 1236 | 11.8712 | 0.0091 |
| 10.3252 | 5.0 | 1545 | 6.6642 | 0.0091 |
| 10.3252 | 6.0 | 1854 | 5.2754 | 0.0112 |
| 6.2268 | 7.0 | 2163 | 4.8454 | 0.0158 |
| 6.2268 | 8.0 | 2472 | 4.7658 | 0.0140 |
| 4.9682 | 9.0 | 2781 | 4.6860 | 0.0234 |
| 4.7245 | 10.0 | 3090 | 4.6165 | 0.0316 |
| 4.7245 | 11.0 | 3399 | 4.5349 | 0.0446 |
| 4.5441 | 12.0 | 3708 | 4.4555 | 0.0623 |
| 4.3912 | 13.0 | 4017 | 4.3437 | 0.0862 |
| 4.3912 | 14.0 | 4326 | 4.2182 | 0.1330 |
| 4.2211 | 15.0 | 4635 | 4.0752 | 0.2153 |
| 4.2211 | 16.0 | 4944 | 3.9803 | 0.2599 |
| 3.9762 | 17.0 | 5253 | 3.7347 | 0.3596 |
| 3.69 | 18.0 | 5562 | 3.5493 | 0.4194 |
| 3.69 | 19.0 | 5871 | 3.3404 | 0.4813 |
| 3.3803 | 20.0 | 6180 | 3.1122 | 0.5600 |
| 3.3803 | 21.0 | 6489 | 2.8656 | 0.6101 |
| 3.0345 | 22.0 | 6798 | 2.6544 | 0.6462 |
| 2.6793 | 23.0 | 7107 | 2.4178 | 0.6647 |
| 2.6793 | 24.0 | 7416 | 2.1967 | 0.7121 |
| 2.3251 | 25.0 | 7725 | 2.0091 | 0.7203 |
| 1.9975 | 26.0 | 8034 | 1.8189 | 0.7464 |
| 1.9975 | 27.0 | 8343 | 1.6537 | 0.7519 |
| 1.7009 | 28.0 | 8652 | 1.4413 | 0.7880 |
| 1.7009 | 29.0 | 8961 | 1.3137 | 0.7968 |
| 1.4494 | 30.0 | 9270 | 1.2150 | 0.7929 |
| 1.2389 | 31.0 | 9579 | 1.1238 | 0.8041 |
| 1.2389 | 32.0 | 9888 | 1.0215 | 0.8208 |
| 1.0646 | 33.0 | 10197 | 0.9637 | 0.8190 |
| 0.9319 | 34.0 | 10506 | 0.8891 | 0.8299 |
| 0.9319 | 35.0 | 10815 | 0.8520 | 0.8330 |
| 0.8297 | 36.0 | 11124 | 0.8212 | 0.8400 |
| 0.8297 | 37.0 | 11433 | 0.7579 | 0.8415 |
| 0.7293 | 38.0 | 11742 | 0.7254 | 0.8454 |
| 0.6657 | 39.0 | 12051 | 0.7019 | 0.8457 |
| 0.6657 | 40.0 | 12360 | 0.6669 | 0.8527 |
| 0.6047 | 41.0 | 12669 | 0.6510 | 0.8530 |
| 0.6047 | 42.0 | 12978 | 0.6264 | 0.8545 |
| 0.557 | 43.0 | 13287 | 0.6275 | 0.8506 |
| 0.5126 | 44.0 | 13596 | 0.5947 | 0.8536 |
| 0.5126 | 45.0 | 13905 | 0.5860 | 0.8573 |
| 0.475 | 46.0 | 14214 | 0.5745 | 0.8545 |
| 0.4406 | 47.0 | 14523 | 0.5579 | 0.8600 |
| 0.4406 | 48.0 | 14832 | 0.5386 | 0.8621 |
| 0.4086 | 49.0 | 15141 | 0.5346 | 0.8624 |
| 0.4086 | 50.0 | 15450 | 0.5200 | 0.8612 |
| 0.3882 | 51.0 | 15759 | 0.5233 | 0.8612 |
| 0.3646 | 52.0 | 16068 | 0.5148 | 0.8640 |
| 0.3646 | 53.0 | 16377 | 0.5078 | 0.8679 |
| 0.3386 | 54.0 | 16686 | 0.5067 | 0.8646 |
| 0.3386 | 55.0 | 16995 | 0.4976 | 0.8673 |
| 0.3208 | 56.0 | 17304 | 0.4934 | 0.8682 |
| 0.3039 | 57.0 | 17613 | 0.4849 | 0.8688 |
| 0.3039 | 58.0 | 17922 | 0.4930 | 0.8691 |
| 0.2915 | 59.0 | 18231 | 0.4867 | 0.8655 |
| 0.2784 | 60.0 | 18540 | 0.4832 | 0.8679 |
| 0.2784 | 61.0 | 18849 | 0.4785 | 0.8670 |
| 0.2597 | 62.0 | 19158 | 0.4753 | 0.8685 |
| 0.2597 | 63.0 | 19467 | 0.4701 | 0.8712 |
| 0.2488 | 64.0 | 19776 | 0.4766 | 0.8697 |
| 0.2426 | 65.0 | 20085 | 0.4726 | 0.8700 |
| 0.2426 | 66.0 | 20394 | 0.4670 | 0.8694 |
| 0.2261 | 67.0 | 20703 | 0.4624 | 0.8722 |
| 0.2252 | 68.0 | 21012 | 0.4631 | 0.8718 |
| 0.2252 | 69.0 | 21321 | 0.4702 | 0.8670 |
| 0.2116 | 70.0 | 21630 | 0.4629 | 0.8715 |
| 0.2116 | 71.0 | 21939 | 0.4650 | 0.8685 |
| 0.2032 | 72.0 | 22248 | 0.4670 | 0.8673 |
| 0.2035 | 73.0 | 22557 | 0.4565 | 0.8670 |
| 0.2035 | 74.0 | 22866 | 0.4550 | 0.8697 |
| 0.19 | 75.0 | 23175 | 0.4544 | 0.8706 |
| 0.19 | 76.0 | 23484 | 0.4483 | 0.8670 |
| 0.1833 | 77.0 | 23793 | 0.4650 | 0.8694 |
| 0.184 | 78.0 | 24102 | 0.4604 | 0.8709 |
| 0.184 | 79.0 | 24411 | 0.4484 | 0.8697 |
| 0.1728 | 80.0 | 24720 | 0.4469 | 0.8758 |
| 0.1688 | 81.0 | 25029 | 0.4536 | 0.8676 |
| 0.1688 | 82.0 | 25338 | 0.4450 | 0.8709 |
| 0.1674 | 83.0 | 25647 | 0.4530 | 0.8691 |
| 0.1674 | 84.0 | 25956 | 0.4532 | 0.8725 |
| 0.1632 | 85.0 | 26265 | 0.4495 | 0.8718 |
| 0.1605 | 86.0 | 26574 | 0.4440 | 0.8673 |
| 0.1605 | 87.0 | 26883 | 0.4504 | 0.8731 |
| 0.1586 | 88.0 | 27192 | 0.4551 | 0.8667 |
| 0.1558 | 89.0 | 27501 | 0.4498 | 0.8670 |
| 0.1558 | 90.0 | 27810 | 0.4516 | 0.8718 |
| 0.1587 | 91.0 | 28119 | 0.4450 | 0.8725 |
| 0.1587 | 92.0 | 28428 | 0.4435 | 0.8706 |
| 0.1505 | 93.0 | 28737 | 0.4459 | 0.8722 |
| 0.1492 | 94.0 | 29046 | 0.4578 | 0.8673 |
| 0.1492 | 95.0 | 29355 | 0.4499 | 0.8725 |
| 0.1459 | 96.0 | 29664 | 0.4494 | 0.8703 |
| 0.1459 | 97.0 | 29973 | 0.4533 | 0.8697 |
| 0.1481 | 98.0 | 30282 | 0.4524 | 0.8652 |
| 0.1477 | 99.0 | 30591 | 0.4496 | 0.8715 |
| 0.1477 | 100.0 | 30900 | 0.4523 | 0.8661 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.15.0
- Tokenizers 0.15.1
|
mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF | mradermacher | 2024-06-21T14:35:33Z | 45 | 0 | transformers | [
"transformers",
"gguf",
"distillation",
"synthetic data",
"function calling",
"structured outputs",
"json mode",
"en",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-70B",
"base_model:quantized:NousResearch/Hermes-2-Theta-Llama-3-70B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-21T04:28:26Z | ---
base_model: NousResearch/Hermes-2-Theta-Llama-3-70B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- distillation
- synthetic data
- function calling
- structured outputs
- json mode
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-70B-GGUF/resolve/main/Hermes-2-Theta-Llama-3-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
excalibur12/wav2vec2-large-lv60_scoring-phoneme-asr-timit_l2-english_nia037-saq-20s-11k_001 | excalibur12 | 2024-06-21T14:35:09Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-lv60",
"base_model:finetune:facebook/wav2vec2-large-lv60",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-21T02:05:30Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-lv60
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60_scoring-phoneme-asr-timit_l2-english_nia037-saq-20s-11k_001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60_scoring-phoneme-asr-timit_l2-english_nia037-saq-20s-11k_001
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5816
- eval_per: 0.2167
- eval_pcc: 0.7021
- eval_ctc_loss: 0.6796
- eval_mse_loss: 0.9020
- eval_runtime: 699.4014
- eval_samples_per_second: 2.133
- eval_steps_per_second: 2.133
- epoch: 8.09
- step: 6000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- training_steps: 7500
### Framework versions
- Transformers 4.38.1
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.2
|
hcy5561/xlm-roberta-base-finetuned-panx-de | hcy5561 | 2024-06-21T14:30:42Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-22T21:02:01Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1378
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2537 | 1.0 | 525 | 0.1586 | 0.8108 |
| 0.1271 | 2.0 | 1050 | 0.1380 | 0.8551 |
| 0.078 | 3.0 | 1575 | 0.1378 | 0.8649 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Niki548/prot_bert-fine-tuned-toxicity_2.1 | Niki548 | 2024-06-21T14:30:01Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Rostlab/prot_bert",
"base_model:finetune:Rostlab/prot_bert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-21T14:16:57Z | ---
base_model: Rostlab/prot_bert
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: prot_bert-fine-tuned-toxicity_2.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prot_bert-fine-tuned-toxicity_2.1
This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6960
- Accuracy: 0.5484
- Precision: 0.3007
- Recall: 0.5484
- F1: 0.3884
All params of Protbert were freezed.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6963 | 1.0 | 16 | 0.7071 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.7041 | 2.0 | 32 | 0.7012 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6946 | 3.0 | 48 | 0.7030 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.692 | 4.0 | 64 | 0.6939 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6951 | 5.0 | 80 | 0.6929 | 0.4516 | 0.2040 | 0.4516 | 0.2810 |
| 0.6939 | 6.0 | 96 | 0.6969 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6927 | 7.0 | 112 | 0.6944 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6911 | 8.0 | 128 | 0.6960 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/Shark-1-Ogno-9b-passthrough-GGUF | mradermacher | 2024-06-21T14:17:02Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"powermove72/Shark-1",
"eren23/OGNO-7b-dpo-truthful",
"en",
"base_model:powermove72/Shark-1-Ogno-9b-passthrough",
"base_model:quantized:powermove72/Shark-1-Ogno-9b-passthrough",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-21T13:43:53Z | ---
base_model: powermove72/Shark-1-Ogno-9b-passthrough
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/Shark-1-Ogno-9b-passthrough
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.IQ4_XS.gguf) | IQ4_XS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q4_K_M.gguf) | Q4_K_M | 5.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q5_K_S.gguf) | Q5_K_S | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q5_K_M.gguf) | Q5_K_M | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q6_K.gguf) | Q6_K | 7.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q8_0.gguf) | Q8_0 | 9.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.f16.gguf) | f16 | 18.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kmaksatk/experiments | kmaksatk | 2024-06-21T14:02:29Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:SG161222/Realistic_Vision_V6.0_B1_noVAE",
"base_model:adapter:SG161222/Realistic_Vision_V6.0_B1_noVAE",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-06-17T13:36:17Z | ---
base_model: SG161222/Realistic_Vision_V6.0_B1_noVAE
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-kmaksatk/experiments
These are controlnet weights trained on SG161222/Realistic_Vision_V6.0_B1_noVAE with new type of conditioning.
You can find some example images below.
prompt: human

prompt: human

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
prithivMLmods/Canes-Cars-Model-LoRA | prithivMLmods | 2024-06-21T14:01:37Z | 89 | 9 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"StableDiffusionXLPipeline",
"Inference Endpoints",
"sdxl",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-06-21T03:54:24Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- StableDiffusionXLPipeline
- Inference Endpoints
- sdxl
widget:
- text: >-
A black Ford Mustang was photographed on the side of an abandoned road in
British Columbia, Canada surrounded by trees and grass. The car is driving
fast down the dirt road at sunset. Light beams shine from its headlights.
Shot in the style of Peter Lindbergh for Aman Resorts --ar 85:128 --v 6.0
--style raw
output:
url: images/c1.png
- text: >-
A cars headlights illuminate part of its body. This scene creates a tranquil
atmosphere with soft lighting and warm tones, in the style of minimalism.
--ar 85:128 --v 6.0 --style raw
output:
url: images/c2.png
- text: >-
An vector illustration of an old car on the road, in a retro poster style
with orange and yellow colors, featuring vintage cars from past years, set
against a backdrop of mountains, trees, clouds, and sunset. The scene
captures a sense of adventure as if moving through time in a classic
automotive journey. Its an appealing design that would be perfect for
travel-themed projects or packaging materials related to vintage vehicles,
in the style of retro posters. --ar 128:89 --v 6.0 --style raw
output:
url: images/c3.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Cars, Concept Cars
license: creativeml-openrail-m
---
# Car Model
<Gallery />
## Model description
Cars-Model / Concept Cars
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat | 20 |
| Epoch | 20 | Save Every N Epochs | 1 |
## SETTING-UP
```py
pipe = StableDiffusionXLPipeline.from_pretrained(
"-------------xxxxxxxxx----------",
torch_dtype=torch.float16,
use_safetensors=True,
)
(or)
-----------------------------------------------------------
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16,
use_safetensors=True,
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("prithivMLmods/Canes-Cars-Model-LoRA", weight_name="Canes-Cars-Model-LoRA.safetensors", adapter_name="car")
pipe.set_adapters("car")
pipe.to("cuda")
```
## Trigger prompts
A black Ford Mustang was photographed on the side of an abandoned road in British Columbia, Canada surrounded by trees and grass. The car is driving fast down the dirt road at sunset. Light beams shine from its headlights. Shot in the style of Peter Lindbergh for Aman Resorts --ar 85:128 --v 6.0 --style raw
A car's headlights illuminate part of its body. This scene creates a tranquil atmosphere with soft lighting and warm tones, in the style of minimalism. --ar 85:128 --v 6.0 --style raw
An vector illustration of an old car on the road, in a retro poster style with orange and yellow colors, featuring vintage cars from past years, set against a backdrop of mountains, trees, clouds, and sunset. The scene captures a sense of adventure as if moving through time in a classic automotive journey. It's an appealing design that would be perfect for travel-themed projects or packaging materials related to vintage vehicles, in the style of retro posters. --ar 128:89 --v 6.0 --style raw
| Parameter | Value |
|-----------------|---------------------------------------------------------------------------------------|
| Prompt | A car's headlights illuminate part of its body. This scene creates a tranquil atmosphere with soft lighting and warm tones, in the style of minimalism. --ar 85:128 --v 6.0 --style raw |
| Sampler | euler |
## Trigger words
You should use `Cars` to trigger the image generation.
You should use `Concept Cars` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Canes-Cars-Model-LoRA/tree/main) them in the Files & versions tab. |
bullerwins/Hermes-2-Theta-Llama-3-70B-exl2_4.0bpw | bullerwins | 2024-06-21T13:58:53Z | 7 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"distillation",
"synthetic data",
"function calling",
"structured outputs",
"json mode",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-06-21T13:45:27Z | ---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- distillation
- synthetic data
- function calling
- structured outputs
- json mode
---
Quantized to exl2 using [Exllamav2 0.1.5](https://github.com/turboderp/exllamav2)
Original model [NousResearch/Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B)
# Hermes 2 Theta Llama-3 70B Model Card

## Model Description
Hermes-2 Θ (Theta) 70B is the continuation of our experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard and [Arcee AI](https://www.arcee.ai/), the team behind [MergeKit](https://github.com/arcee-ai/mergekit).
Hermes-2 Θ is a merged and then further RLHF'ed version our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model to form a new model, Hermes-2 Θ, combining the best of both worlds of each model.
# Comparison to Llama-3 70B Instruct Model:

# Example Outputs
## System Prompt: Roleplay as an Anime Catgirl who's good at programming and a hacker.

## Providing Structured Outputs for Annotating LLM Training Data

## System Prompt: Roleplay as a bombastic alchemist from the 17th century in France. You are on a journey with the user to find the philosopher's stone.

# Prompt Format & Capabilities
Hermes 2 Θ uses ChatML as the prompt format, opening up a very structured and steerable, multiturn system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where uniquely added tokens were added to denote the beginning and end of any turn, along with roles for the turns.
The model is also specifically trained for Function Calling, Structured Outputs with JSON, and Feature Extraction from RAG Documents (see below).
## Conversational Chats
Here is an example prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
## Function Calling Format
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
<tools>
{"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}}
</tools>
Use the following pydantic model json schema for each tool call you will make:
{"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"}
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function.
See example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:
<schema>
{schema}
</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmark Details
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6638|_ |0.0138|
| | |acc_norm|0.6903|_ |0.0135|
|arc_easy | 0|acc |0.8851|_ |0.0065|
| | |acc_norm|0.8712|_ |0.0069|
|boolq | 1|acc |0.8820|_ |0.0056|
|hellaswag | 0|acc |0.6579|_ |0.0047|
| | |acc_norm|0.8432|_ |0.0036|
|openbookqa | 0|acc |0.3920|_ |0.0219|
| | |acc_norm|0.4740|_ |0.0224|
|piqa | 0|acc |0.8286|_ |0.0088|
| | |acc_norm|0.8351|_ |0.0087|
|winogrande | 0|acc |0.7893|_ |0.0115|
```
Average: 76.93
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.4055|_ |0.0309|
| | |acc_norm|0.4094|_ |0.0309|
|agieval_logiqa_en | 0|acc |0.5100|_ |0.0196|
| | |acc_norm|0.5023|_ |0.0196|
|agieval_lsat_ar | 0|acc |0.2783|_ |0.0296|
| | |acc_norm|0.2957|_ |0.0302|
|agieval_lsat_lr | 0|acc |0.7451|_ |0.0193|
| | |acc_norm|0.7333|_ |0.0196|
|agieval_lsat_rc | 0|acc |0.8290|_ |0.0230|
| | |acc_norm|0.8104|_ |0.0239|
|agieval_sat_en | 0|acc |0.9029|_ |0.0207|
| | |acc_norm|0.9029|_ |0.0207|
|agieval_sat_en_without_passage| 0|acc |0.5825|_ |0.0344|
| | |acc_norm|0.5631|_ |0.0346|
|agieval_sat_math | 0|acc |0.6318|_ |0.0326|
| | |acc_norm|0.6227|_ |0.0328|
```
Average: 60.50
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6737|_ |0.0341|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7724|_ |0.0219|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3256|_ |0.0292|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4763|_ |0.0264|
| | |exact_str_match |0.0000|_ |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.4720|_ |0.0223|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.3486|_ |0.0180|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.6367|_ |0.0278|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.5220|_ |0.0224|
|bigbench_navigate | 0|multiple_choice_grade|0.5930|_ |0.0155|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.8600|_ |0.0078|
|bigbench_ruin_names | 0|multiple_choice_grade|0.7411|_ |0.0207|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.5281|_ |0.0158|
|bigbench_snarks | 0|multiple_choice_grade|0.6961|_ |0.0343|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5751|_ |0.0158|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.9880|_ |0.0034|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2296|_ |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1691|_ |0.0090|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.6367|_ |0.0278|
```
Average: 56.91
## TruthfulQA:
```| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4565|_ |0.0174|
| | |mc2 |0.6288|_ |0.0151|
```
62.88
## IFEval:
**87.99**
## MTBench:
First Turn - **9.1625**
Second Turn - **8.925**
Average - **9.04375**
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-70B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-70B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF
# How to cite:
```bibtext
@misc{Hermes-2-Theta-Llama-3-70B,
url={[https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B][NousResearch/Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-70B))},
title={Hermes-2-Theta-Llama-3-70B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
|
morturr/flan-t5-base-amazon-text-classification-21-6-test | morturr | 2024-06-21T13:57:50Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-21T11:49:33Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-amazon-text-classification-21-6-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-amazon-text-classification-21-6-test
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.1+cu121
- Datasets 2.10.1
- Tokenizers 0.15.2
|
AliAbdelrasheed/maqa_llama_4bit_GGUF | AliAbdelrasheed | 2024-06-21T13:57:45Z | 38 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:AliAbdelrasheed/maqa_llama",
"base_model:quantized:AliAbdelrasheed/maqa_llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-21T13:44:12Z | ---
base_model: AliAbdelrasheed/maqa_llama
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** AliAbdelrasheed
- **License:** apache-2.0
- **Finetuned from model :** AliAbdelrasheed/maqa_llama
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SidXXD/eps_50 | SidXXD | 2024-06-21T13:56:29Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-06-21T11:19:23Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/eps_50
These are Custom Diffusion adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on photo of a <v1*> cat using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
nagrajn/TinyLinuxDSLM50M | nagrajn | 2024-06-21T13:55:15Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T13:54:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hmone231/burmese_gpt2 | hmone231 | 2024-06-21T13:50:42Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T13:50:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
angelosps/delta_m | angelosps | 2024-06-21T13:50:25Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"textual-entailment",
"logical-reasoning",
"deberta",
"en",
"arxiv:2311.08941",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-21T10:27:22Z | ---
license: mit
tags:
- textual-entailment
- logical-reasoning
- deberta
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
# DELTA: Description Logics with Transformers
Fine-tuning a transformer model for textual entailment over expressive contexts generated from description logic knowledge bases.
Specifically, the model is given a context (a set of facts and rules) and a question.
The model should answer with "True" if the question is logically implied from the context, "False" if it contradicts the context, and "Unknown" if none of the two.
For more info please see our paper.
## Model Details
### Model Description
DELTA<sub>M</sub> is a DeBERTaV3 large model fine-tuned on the DELTA<sub>D</sub> dataset.
- **License:** MIT
- **Finetuned from model:** `microsoft/deberta-v3-large`
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/angelosps/DELTA
- **Paper:** [Transformers in the Service of Description Logic-based Contexts](https://arxiv.org/abs/2311.08941)
<!-- ## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
<!-- ### Direct Use -->
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- [More Information Needed] -->
<!-- ### Downstream Use [optional] -->
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- [More Information Needed] -->
<!-- ## Training Details
### Training Data -->
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
<!-- [More Information Needed]
### Training Procedure -->
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
<!-- #### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
<!-- #### Speeds, Sizes, Times [optional] -->
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
<!-- [More Information Needed]
## Evaluation -->
<!-- This section describes the evaluation protocols and provides the results. -->
<!-- ### Testing Data, Factors & Metrics
#### Testing Data
-->
<!-- This should link to a Dataset Card if possible. -->
<!-- [More Information Needed]
#### Metrics -->
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
<!-- [More Information Needed]
### Results
[More Information Needed] -->
<!-- #### Summary -->
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{poulis2024transformers,
title={Transformers in the Service of Description Logic-based Contexts},
author={Angelos Poulis and Eleni Tsalapati and Manolis Koubarakis},
year={2024},
eprint={2311.08941},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- ## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] --> |
DBangshu/Base_gemma_e5_8_1 | DBangshu | 2024-06-21T13:47:49Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T13:45:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ar9av/moondream_ft_charts_train_test_val | ar9av | 2024-06-21T13:41:37Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"moondream1",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-06-21T11:35:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
basakdemirok/bert-base-multilingual-cased-news_cat_v0 | basakdemirok | 2024-06-21T13:39:18Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-21T11:35:01Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: basakdemirok/bert-base-multilingual-cased-news_cat_v0
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# basakdemirok/bert-base-multilingual-cased-news_cat_v0
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2231
- Validation Loss: 0.3465
- Train F1: 0.9195
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 748, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.9217 | 0.4899 | 0.8713 | 0 |
| 0.3574 | 0.4040 | 0.8741 | 1 |
| 0.2231 | 0.3465 | 0.9195 | 2 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.13.1
- Datasets 2.4.0
- Tokenizers 0.19.1
|
Cyanex/D.r.e.a.m_Mega | Cyanex | 2024-06-21T13:27:24Z | 12 | 3 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"art",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-11T22:14:05Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- art
language:
- en
---
# D.r.e.a.m (Digital Rendering Engine for Artistic Melodies)
## Welcome to D.r.e.a.m (Digital Rendering Engine for Artistic Melodies).
The model is currently in its training phase. This is not the final version and may contain artifacts, potentially performing poorly in some cases. The goal of this model is to create images similar to those produced by Midjourney. It is being trained using the Midjourney Normalized Dataset available on Kaggle.
## Model Details
- **Developed by:** Cyanex1702
- **Model Type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **Dataset:** [DreamScape](http://https://www.kaggle.com/datasets/cyanex1702/midjouney-normalized-dataset "DreamScape")
- **Training Status:** In Progress
## Model Description
D.r.e.a.m is a model designed to generate and modify images based on text prompts. The model leverages advanced diffusion techniques to create high-quality, artistic renderings from textual descriptions, aiming to emulate the style and creativity of Midjourney.
## Samples










## Features
- **Text-to-Image Generation:** Generate images from descriptive text prompts.
- **Image Modification:** Modify existing images based on new text inputs.
- **Creative Rendering:** Produce artistic and imaginative images.
## Usage
To use the model, you can input text prompts in English. The model will process these prompts and generate corresponding images. Note that due to the model's current training phase, the results may vary and contain imperfections.
## Contributing
We welcome contributions from the community! If you'd like to contribute. |
sidvash/famus_multi_instance_Phi-3-medium-4k-instruct-2-epochs-merged_16bit | sidvash | 2024-06-21T13:26:02Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T12:48:09Z | ---
base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
# Uploaded model
- **Developed by:** sidvash
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Cyanex/D.R.E.A.M | Cyanex | 2024-06-21T13:21:04Z | 65 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"art",
"stable-diffusion-diffusers",
"en",
"doi:10.57967/hf/2592",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-20T08:55:53Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- art
- stable-diffusion-diffusers
- text-to-image
- diffusers
language:
- en
pipeline_tag: text-to-image
library_name: diffusers
---
# D.r.e.a.m (Digital Rendering Engine for Artistic Melodies)
## Welcome to D.r.e.a.m (Digital Rendering Engine for Artistic Melodies).
The D.R.E.A.M. model suite is designed for various image generation purposes, featuring specialized models for different styles. The general-purpose image generation model, simply referred to as the D.R.E.A.M. model, caters to a wide range of image creation needs. It is trained on the Cyberverse dataset, ensuring it has a robust understanding of diverse visual contexts and can produce versatile outputs.
For generating anime-related images, the suite includes the Dream-Anime model. This model is fine-tuned specifically to create high-quality, detailed anime artwork. By leveraging the Surreal Symphonies dataset available on Kaggle, Dream-Anime excels in capturing the distinct stylistic elements and aesthetics of anime, making it an ideal tool for anime enthusiasts and creators.
Additionally, the Dream-Photorealism model is tailored to generate photorealistic images. Also trained on the Surreal Symphonies dataset, this model focuses on producing images with lifelike quality and precision. Its training enables it to render images that closely mimic real-world visuals, making it suitable for applications requiring high degrees of realism.
Each model within the D.R.E.A.M. suite is optimized for its specific purpose, ensuring high performance and quality in its respective domain of image generation.
## Model Details
- **Developed by:** Cyanex1702
- **Model Type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **Training Status:** In Progress
## Model Description
D.r.e.a.m is a model designed to generate and modify images based on text prompts. The model leverages advanced diffusion techniques to create high-quality, artistic renderings from textual descriptions, aiming to emulate the style and creativity of Midjourney.
## Features
- **Text-to-Image Generation:** Generate images from descriptive text prompts.
- **Image Modification:** Modify existing images based on new text inputs.
- **Creative Rendering:** Produce artistic and imaginative images.
## Usage
To use the model, you can input text prompts in English. The model will process these prompts and generate corresponding images. Note that due to the model's current training phase, the results may vary and contain imperfections.
## Contributing
We welcome contributions from the community! If you'd like to contribute. |
scaaseu/distilbert-emotion | scaaseu | 2024-06-21T13:18:04Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-21T13:14:08Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1518
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.1969 | 0.928 |
| 0.3408 | 2.0 | 500 | 0.1518 | 0.9335 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hon9kon9ize/bert-large-cantonese | hon9kon9ize | 2024-06-21T13:10:19Z | 312 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"yue",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-07T13:28:10Z | ---
library_name: transformers
language:
- yue
license: cc-by-4.0
tags:
- generated_from_trainer
pipeline_tag: fill-mask
widget:
- text: 香港原本[MASK]一個人煙稀少嘅漁港。
example_title: 係
model-index:
- name: bert-large-cantonese
results: []
---
# bert-large-cantonese
## Description
This model is tranied from scratch on Cantonese text. It is a BERT model with a large architecture (24-layer, 1024-hidden, 16-heads, 326M parameters).
The first training stage is to pre-train the model on 128 length sequences with a batch size of 512 for 1 epoch. the second stage is to continued pre-train the model on 512 length sequences with a batch size of 512 for one more epoch.
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline
mask_filler = pipeline(
"fill-mask",
model="hon9kon9ize/bert-large-cantonese"
)
mask_filler("雞蛋六隻,糖呢就兩茶匙,仲有[MASK]橙皮添。")
; [{'score': 0.08160534501075745,
; 'token': 943,
; 'token_str': '個',
; 'sequence': '雞 蛋 六 隻 , 糖 呢 就 兩 茶 匙 , 仲 有 個 橙 皮 添 。'},
; {'score': 0.06182105466723442,
; 'token': 1576,
; 'token_str': '啲',
; 'sequence': '雞 蛋 六 隻 , 糖 呢 就 兩 茶 匙 , 仲 有 啲 橙 皮 添 。'},
; {'score': 0.04600336775183678,
; 'token': 1646,
; 'token_str': '嘅',
; 'sequence': '雞 蛋 六 隻 , 糖 呢 就 兩 茶 匙 , 仲 有 嘅 橙 皮 添 。'},
; {'score': 0.03743772581219673,
; 'token': 3581,
; 'token_str': '橙',
; 'sequence': '雞 蛋 六 隻 , 糖 呢 就 兩 茶 匙 , 仲 有 橙 橙 皮 添 。'},
; {'score': 0.031560592353343964,
; 'token': 5148,
; 'token_str': '紅',
; 'sequence': '雞 蛋 六 隻 , 糖 呢 就 兩 茶 匙 , 仲 有 紅 橙 皮 添 。'}]
```
## Training hyperparameters
The following hyperparameters were used during first training:
- Batch size: 512
- Learning rate: 1e-4
- Learning rate scheduler: linear decay
- 1 Epoch
- Warmup ratio: 0.1
Loss plot on [WanDB](https://api.wandb.ai/links/indiejoseph/v3ljlpmp)
The following hyperparameters were used during second training:
- Batch size: 512
- Learning rate: 5e-5
- Learning rate scheduler: linear decay
- 1 Epoch
- Warmup ratio: 0.1
Loss plot on [WanDB](https://api.wandb.ai/links/indiejoseph/vcm3q1ef)
|
ChaiML/gpt2_xl_pairwise_89m_step_347634 | ChaiML | 2024-06-21T13:06:55Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-21T13:04:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jules2809/codellama_f_gguf | Jules2809 | 2024-06-21T13:04:51Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/codellama-7b-bnb-4bit",
"base_model:quantized:unsloth/codellama-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-21T12:55:33Z | ---
base_model: unsloth/codellama-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Jules2809
- **License:** apache-2.0
- **Finetuned from model :** unsloth/codellama-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KaranNag/Ai_text_model | KaranNag | 2024-06-21T12:57:49Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-21T12:23:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alielfilali01/L3H10M-0000 | alielfilali01 | 2024-06-21T12:41:20Z | 2,916 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-17T23:18:24Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alielfilali01/Q2H10M-0000 | alielfilali01 | 2024-06-21T12:37:12Z | 2,890 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-18T12:29:27Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alielfilali01/M03H10M-0000 | alielfilali01 | 2024-06-21T12:37:00Z | 2,898 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-18T15:05:46Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ctrlg/hmm_tulu2-7b_writing-prompts_32768 | ctrlg | 2024-06-21T12:32:35Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-06-21T12:28:52Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Saregama2024/tuneup_demo | Saregama2024 | 2024-06-21T12:31:17Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-06-21T12:18:22Z | ---
inference: true
tags:
- musicgen
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
widget:
- text: "a funky house with 80s hip hop vibes"
example_title: "Prompt 1"
- text: "a chill song with influences from lofi, chillstep and downtempo"
example_title: "Prompt 2"
- text: "a catchy beat for a podcast intro"
example_title: "Prompt 3"
---
# MusicGen - Small - 300M
MusicGen is a text-to-music model capable of genreating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [**small** (this checkpoint)](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [large](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade transformers scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
from transformers import pipeline
import scipy
synthesiser = pipeline("text-to-audio", "facebook/musicgen-small")
music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
3. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library]((https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("small")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization, an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters ; and two variants: a model trained for text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateur seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| **facebook/musicgen-small** | 4.88 | 1.42 | 0.27 | - |
| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
| facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. |
Kijai/spo-loras-comfyui | Kijai | 2024-06-21T12:30:23Z | 0 | 7 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-08T21:28:05Z | ---
license: apache-2.0
---
Extracted lora from: https://huggingface.co/SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep |
Niki548/prot_bert-fine-tuned-toxicity_3 | Niki548 | 2024-06-21T12:27:40Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Rostlab/prot_bert",
"base_model:finetune:Rostlab/prot_bert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-21T12:03:33Z | ---
base_model: Rostlab/prot_bert
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: prot_bert-fine-tuned-toxicity_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prot_bert-fine-tuned-toxicity_3
This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0450
- Accuracy: 0.7419
- Precision: 0.7530
- Recall: 0.7419
- F1: 0.7340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5735 | 1.0 | 16 | 0.5968 | 0.8065 | 0.8093 | 0.8065 | 0.8044 |
| 0.4855 | 2.0 | 32 | 0.4939 | 0.8387 | 0.8391 | 0.8387 | 0.8380 |
| 0.3499 | 3.0 | 48 | 0.5234 | 0.8065 | 0.8093 | 0.8065 | 0.8044 |
| 0.3094 | 4.0 | 64 | 0.4639 | 0.8387 | 0.8391 | 0.8387 | 0.8380 |
| 0.235 | 5.0 | 80 | 0.5654 | 0.8387 | 0.8391 | 0.8387 | 0.8380 |
| 0.1608 | 6.0 | 96 | 0.6409 | 0.8387 | 0.8391 | 0.8387 | 0.8380 |
| 0.1147 | 7.0 | 112 | 0.6271 | 0.8387 | 0.8391 | 0.8387 | 0.8380 |
| 0.171 | 8.0 | 128 | 0.7704 | 0.8065 | 0.8093 | 0.8065 | 0.8044 |
| 0.1762 | 9.0 | 144 | 0.7978 | 0.8065 | 0.8093 | 0.8065 | 0.8044 |
| 0.1438 | 10.0 | 160 | 0.9561 | 0.7419 | 0.7530 | 0.7419 | 0.7340 |
| 0.0953 | 11.0 | 176 | 1.0074 | 0.7419 | 0.7530 | 0.7419 | 0.7340 |
| 0.0394 | 12.0 | 192 | 1.0037 | 0.7419 | 0.7530 | 0.7419 | 0.7340 |
| 0.0896 | 13.0 | 208 | 1.0128 | 0.7419 | 0.7530 | 0.7419 | 0.7340 |
| 0.0849 | 14.0 | 224 | 1.0404 | 0.7419 | 0.7530 | 0.7419 | 0.7340 |
| 0.0207 | 15.0 | 240 | 1.0450 | 0.7419 | 0.7530 | 0.7419 | 0.7340 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
MILVLG/Imp-v1.5-2B-Qwen1.5 | MILVLG | 2024-06-21T12:23:01Z | 17 | 1 | transformers | [
"transformers",
"safetensors",
"imp_qwen2",
"text-generation",
"conversational",
"custom_code",
"dataset:liuhaotian/LLaVA-Pretrain",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2405.12107",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-20T13:58:06Z | ---
license: apache-2.0
pipeline_tag: text-generation
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
---
# 😈 Imp
\[[Paper](https://arxiv.org/abs/2405.12107)\] [[Demo](https://xmbot.net/imp/)\] [[Github](https://github.com/MILVLG/imp)\]
## Introduction
The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our `Imp-v1.5-2B-Qwen1.5` is a strong lightweight LMM with only **2B** parameters, which is build upon [Qwen1.5-1.8B-Chat ](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat)(1.8B) and a powerful visual encoder [SigLIP ](https://huggingface.co/google/siglip-so400m-patch14-384)(0.4B), and trained on on 1M mixed dataset.
As shown in the Table below, `Imp-v1.5-2B-Qwen1.5` significantly outperforms the counterparts of similar model sizes.
We release our model weights and provide an example below to run our model . Detailed technical report and corresponding training/evaluation code will be released soon on our [GitHub repo](https://github.com/MILVLG/imp). We will persistently improve our model and release the next versions to further improve model performance :)
## How to use
**Install dependencies**
```bash
pip install transformers # latest version is ok, but we recommend v4.36.0
pip install -q pillow accelerate einops
```
You can use the following code for model inference. The format of text instruction is similar to [LLaVA](https://github.com/haotian-liu/LLaVA). Note that the example can only be run on GPUs currently.
```Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
torch.set_default_device("cuda")
#Create model
model = AutoModelForCausalLM.from_pretrained(
"MILVLG/Imp-v1.5-2B-Qwen1.5/",
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MILVLG/Imp-v1.5-2B-Qwen1.5", trust_remote_code=True)
#Set inputs
text = "<|im_start|>system\nA chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.<|im_end|>\n<|im_start|>user\n<image>\nWhat are the colors of the bus in the image?<|im_end|>\n<|im_start|>assistant"
image = Image.open("images/bus.jpg")
input_ids = tokenizer(text, return_tensors='pt').input_ids
image_tensor = model.image_preprocess(image)
#Generate the answer
output_ids = model.generate(
input_ids,
max_new_tokens=100,
images=image_tensor,
use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
## Model evaluation
We conduct evaluation on 9 commonly-used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing lightweight LMMs of similar model sizes.
| Models | Size | VQAv2 | GQA | SQA(IMG) | TextVQA | POPE | MME(P) | MMB |MMBCN |MM-Vet|
|:--------:|:-----:|:----:|:-------------:|:--------:|:-----:|:----:|:-------:|:-------:|:-------:|:-------:|
| [Mini-Gemini-2B](https://github.com/dvlab-research/MGM) | 2B |- | -| 56.2 |-| -| **1341** | 59.8 |- | 31.1|
| [Bunny-v1.0-2B-zh](https://huggingface.co/BAAI/Bunny-v1_0-2B-zh) | 2B |76.6 | 59.6| 64.6 |-| 85.8 | 1300.8 | 59.1 |58.5 | -|
| **Imp-v1.5-2B-Qwen1.5** | 2B | **79.2** | **61.9** | **66.1**| **54.5** | **86.7**| 1304.8 | **63.8**| **61.3** |**33.5**|
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details.
## Citation
If you use our model or refer our work in your studies, please cite:
```bibtex
@article{imp2024,
title={Imp: Highly Capable Large Multimodal Models for Mobile Devices},
author={Shao, Zhenwei and Yu, Zhou and Yu, Jun and Ouyang, Xuecheng and Zheng, Lihao and Gai, Zhenbiao and Wang, Mingyang and Ding, Jiajun},
journal={arXiv preprint arXiv:2405.12107},
year={2024}
}
``` |
MILVLG/Imp-v1.5-3B-Phi2 | MILVLG | 2024-06-21T12:22:31Z | 17 | 1 | transformers | [
"transformers",
"safetensors",
"imp",
"text-generation",
"custom_code",
"dataset:liuhaotian/LLaVA-Pretrain",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2405.12107",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-20T12:58:36Z | ---
license: apache-2.0
pipeline_tag: text-generation
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
---
# 😈 Imp
\[[Paper](https://arxiv.org/abs/2405.12107)\] [[Demo](https://xmbot.net/imp/)\] [[Github](https://github.com/MILVLG/imp)\]
## Introduction
The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our `Imp-v1.5-3B-Phi2` is a strong lightweight LMMs with only **3B** parameters, which is build upon [Phi-2 ](https://huggingface.co/microsoft/phi-2)(2.7B) and a powerful visual encoder [SigLIP ](https://huggingface.co/google/siglip-so400m-patch14-384)(0.4B), and trained on 1M mixed dataset.
As shown in the Table below, `Imp-v1.5-3B-Phi2` significantly outperforms the counterparts of similar model sizes, and even achieves slightly better performance than the strong LLaVA-7B model on various multimodal benchmarks.
We release our model weights and provide an example below to run our model . Detailed technical report and corresponding training/evaluation code will be released soon on our [GitHub repo](https://github.com/MILVLG/imp). We will persistently improve our model and release the next versions to further improve model performance :)
## How to use
**Install dependencies**
```bash
pip install transformers # latest version is ok, but we recommend v4.37.0
pip install -q pillow accelerate einops
```
You can use the following code for model inference. The format of text instruction is similar to [LLaVA](https://github.com/haotian-liu/LLaVA). Note that the example can only be run on GPUs currently.
```Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
torch.set_default_device("cuda")
#Create model
model = AutoModelForCausalLM.from_pretrained(
"MILVLG/Imp-v1.5-3B-Phi2/",
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MILVLG/Imp-v1.5-3B-Phi2", trust_remote_code=True)
#Set inputs
text = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhat are the colors of the bus in the image? ASSISTANT:"
image = Image.open("images/bus.jpg")
input_ids = tokenizer(text, return_tensors='pt').input_ids
image_tensor = model.image_preprocess(image)
#Generate the answer
output_ids = model.generate(
input_ids,
max_new_tokens=100,
images=image_tensor,
use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
## Model evaluation
We conduct evaluation on 9 commonly-used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing lightweight LMMs of similar model sizes.
| Models | Size | VQAv2 | GQA | SQA(IMG) | TextVQA | POPE | MME(P) | MMB |MMBCN |MM-Vet|
|:--------:|:-----:|:----:|:-------------:|:--------:|:-----:|:----:|:-------:|:-------:|:-------:|:-------:|
| [LLaVA-v1.5-lora](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 7B |79.1 | 63.0| 68.4 |58.2| 86.4 | 1476.9 | 66.1 |- |30.2|
| [TinyGPT-V-3B](https://huggingface.co/Tyrannosaurus/TinyGPT-V) | 3B | - | 38.9 | - | - | -| - | - |- |-|
| [LaVA-Phi-3B](https://github.com/zhuyiche/llava-phi) | 3B | 71.4 | - | 68.4 | 48.6 | 85.0 | 1335.1 | 59.8 |-|28.9|
| [MobileVLM-3B](https://huggingface.co/mtgv/MobileVLM-3B) | 3B | - | 59.0 | 61.0 | 47.5 | 84.9 | 1288.9 | 59.6 |- |-|
| [MiniCPM-V-3B](https://huggingface.co/mtgv/MobileVLM-3B) | 3B | - |- | - | - | - | 1452.0 | 67.9 | **65.3**|-|
| [Bunny-3B](https://huggingface.co/visheratin/MC-LLaVA-3b) | 3B | 79.8 | 62.5 | 70.9 | - | 86.8| **1488.8** | 68.6 |- |-|
| **Imp-v1.5-3B-Phi2** | 3B | **81.2** | **63.5** | **72.8**| **59.8** | **88.9**| 1446.4 | **72.9**| 46.7 |**43.3**|
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details.
## Citation
If you use our model or refer our work in your studies, please cite:
```bibtex
@article{imp2024,
title={Imp: Highly Capable Large Multimodal Models for Mobile Devices},
author={Shao, Zhenwei and Yu, Zhou and Yu, Jun and Ouyang, Xuecheng and Zheng, Lihao and Gai, Zhenbiao and Wang, Mingyang and Ding, Jiajun},
journal={arXiv preprint arXiv:2405.12107},
year={2024}
}
``` |
MILVLG/Imp-v1.5-4B-Phi3 | MILVLG | 2024-06-21T12:22:00Z | 11 | 6 | transformers | [
"transformers",
"safetensors",
"imp_phi3",
"text-generation",
"conversational",
"custom_code",
"dataset:liuhaotian/LLaVA-Pretrain",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2405.12107",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-20T12:04:35Z | ---
license: apache-2.0
pipeline_tag: text-generation
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
---
# 😈 Imp
\[[Paper](https://arxiv.org/abs/2405.12107)\] [[Demo](https://xmbot.net/imp/)\] [[Github](https://github.com/MILVLG/imp)\]
## Introduction
The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our `Imp-v1.5-4B-Phi3` is a strong lightweight LMMs with only **4B** parameters, which is build upon [Phi-3 ](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)(3.8B) and a powerful visual encoder [SigLIP ](https://huggingface.co/google/siglip-so400m-patch14-384)(0.4B), and trained on 1M mixed dataset.
We release our model weights and provide an example below to run our model . Detailed technical report and corresponding training/evaluation code will be released soon on our [GitHub repo](https://github.com/MILVLG/imp). We will persistently improve our model and release the next versions to further improve model performance :)
## How to use
**Install dependencies**
```bash
pip install transformers # latest version is ok, but we recommend v4.36.0
pip install -q pillow accelerate einops
```
You can use the following code for model inference. The format of text instruction is similar to [LLaVA](https://github.com/haotian-liu/LLaVA). Note that the example can only be run on GPUs currently.
```Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
torch.set_default_device("cuda")
#Create model
model = AutoModelForCausalLM.from_pretrained(
"MILVLG/Imp-v1.5-4B-Phi3/",
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MILVLG/Imp-v1.5-4B-Phi3", trust_remote_code=True)
#Set inputs
text = "<|user|>\n<image>\nWhat are the colors of the bus in the image?\n<|end|>\n<|assistant|>\n"
image = Image.open("images/bus.jpg")
input_ids = tokenizer(text, return_tensors='pt').input_ids
image_tensor = model.image_preprocess(image)
#Generate the answer
output_ids = model.generate(
input_ids,
max_new_tokens=100,
images=image_tensor,
use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
## Model evaluation
We conduct evaluation on 9 commonly-used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing lightweight LMMs of similar model sizes.
| Models | Size | VQAv2 | GQA | SQA(IMG) | TextVQA | POPE | MME(P) | MMB |MMB_CN|MM-Vet|
|:--------:|:-----:|:----:|:-------------:|:--------:|:-----:|:----:|:-------:|:-------:|:-------:|:-------:|
| Bunny-v1.0-4B| 4B | **81.5** |**63.5** | 75.1|- | 86.7| 1495.2 |**73.5** |-|-|
| **Imp-v1.5-4B-Phi3**| 4B | **81.5** | **63.5** | **78.3**|60.2 | **86.9**| **1507.7** |73.3 |61.1|44.6|
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details.
## Citation
If you use our model or refer our work in your studies, please cite:
```bibtex
@article{imp2024,
title={Imp: Highly Capable Large Multimodal Models for Mobile Devices},
author={Shao, Zhenwei and Yu, Zhou and Yu, Jun and Ouyang, Xuecheng and Zheng, Lihao and Gai, Zhenbiao and Wang, Mingyang and Ding, Jiajun},
journal={arXiv preprint arXiv:2405.12107},
year={2024}
}
``` |
Cicistawberry/y2k-lora | Cicistawberry | 2024-06-21T12:15:22Z | 9 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"diffusers-training",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-06-21T05:30:37Z | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers-training
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'A <s0><s1> ad, Coca-Cola'
output:
url:
"image_0.png"
- text: 'A <s0><s1> ad, an orange llama eating ramen'
output:
url:
"image_1.png"
- text: 'A <s0><s1> ad, an astronaut riding a horse'
output:
url:
"image_2.png"
- text: 'A <s0><s1> ad, flying cupcakes and glitter'
output:
url:
"image_3.png"
- text: 'A <s0><s1> ad, the movie Harry Potter'
output:
url:
"image_4.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: 3d icon in the style of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - Cicistawberry/y2k-lora
<Gallery />
## Model description
### These are Cicistawberry/y2k-lora LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`y2k-lora.safetensors` here 💾](/Cicistawberry/y2k-lora/blob/main/y2k-lora.safetensors)**.
- Place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:y2k-lora:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`y2k-lora_emb.safetensors` here 💾](/Cicistawberry/y2k-lora/blob/main/y2k-lora_emb.safetensors)**.
- Place it on it on your `embeddings` folder
- Use it by adding `y2k-lora_emb` to your prompt. For example, `3d icon in the style of y2k-lora_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusioy2k-loran-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Cicistawberry/y2k-lora', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='Cicistawberry/y2k-lora', filename='y2k-lora_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A <s0><s1> ad, Coca-Cola').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/Cicistawberry/y2k-lora/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled. False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
ctrlg/gpt2-large_common-gen | ctrlg | 2024-06-21T12:09:01Z | 207 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-06-21T12:05:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ctrlg/hmm_gpt2-large_common-gen_4096 | ctrlg | 2024-06-21T11:59:02Z | 406 | 0 | transformers | [
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-06-21T11:58:37Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
anderloh/testV4 | anderloh | 2024-06-21T11:57:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test",
"base_model:finetune:anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-06-20T07:55:37Z | ---
base_model: anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: testV4
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: superb
type: superb
config: ks
split: validation
args: ks
metrics:
- name: Accuracy
type: accuracy
value: 0.6209179170344219
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testV4
This model is a fine-tuned version of [anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test](https://huggingface.co/anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9986
- Accuracy: 0.6209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3236 | 1.0 | 100 | 1.9986 | 0.6209 |
| 1.6222 | 2.0 | 200 | 1.5412 | 0.6209 |
| 1.4879 | 3.0 | 300 | 1.5255 | 0.6209 |
| 1.4797 | 4.0 | 400 | 1.5182 | 0.6209 |
| 1.4761 | 5.0 | 500 | 1.5151 | 0.6209 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
SotiriosKastanas/lmsys-vicuna-13b-v1.5-8-bit-bnb | SotiriosKastanas | 2024-06-21T11:57:45Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-21T11:53:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dhahlan2000/Chitti-Base-model-for-GPT-v10 | Dhahlan2000 | 2024-06-21T11:57:23Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Dhahlan2000/Chitti-Base-model-for-GPT-v9",
"base_model:finetune:Dhahlan2000/Chitti-Base-model-for-GPT-v9",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-21T11:57:12Z | ---
license: apache-2.0
base_model: Dhahlan2000/Chitti-Base-model-for-GPT-v9
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Chitti-Base-model-for-GPT-v10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Chitti-Base-model-for-GPT-v10
This model is a fine-tuned version of [Dhahlan2000/Chitti-Base-model-for-GPT-v9](https://huggingface.co/Dhahlan2000/Chitti-Base-model-for-GPT-v9) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9366
- Bleu: 3.8252
- Gen Len: 12.4727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.1655 | 1.0 | 9282 | 2.9457 | 3.7112 | 12.4933 |
| 3.1308 | 2.0 | 18564 | 2.9366 | 3.8252 | 12.4727 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
LeroyDyer/SpydazWeb_AI_Swahili_Project | LeroyDyer | 2024-06-21T11:54:40Z | 71 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"sw",
"dataset:uonlp/CulturaX",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-10T09:18:18Z | ---
license: mit
datasets:
- uonlp/CulturaX
- Rogendo/English-Swahili-Sentence-Pairs
language:
- en
- sw
---
## UPDATED !!!## UPDATED !!!
this model has been updted for rag and tasks in swahili or english as well as prgramming and other school works stuff and bible ! as well as other sacred historical texts also !
This model will be fully swahili speaking despite being adapted from and english speaking model :
All training applied will be in swahili or other dialects @
## UPDATED !!!
undergoing fine tuning stages as well as merging stages and retuning stages !
Searching for instruct datasets in swahili
this is a super fine tuned model ....
but it may be behind other models: in the series :
Hence this model is for applying lora adapter found on the hub and other created for other models :
once applying a lora , set the model in train mode: model.train()
And Train on a previoulsy trained dataset before merging the new lora :
make sure the prvious dataset still is inline with the model :
Often a lora can nudge the model the wrong way and loose some of its previous training as it applys weights on top of the odel which may net be consistant with your model especially if the lora was not trained for this model (but still for the same series (ie mistral))..
|
SotiriosKastanas/lmsys-vicuna-7b-v1.5-8-bit-bnb | SotiriosKastanas | 2024-06-21T11:51:59Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-21T11:49:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LeroyDyer/Mixtral_AI_SwahiliTron_4BIT | LeroyDyer | 2024-06-21T11:49:30Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-21T11:46:58Z | ---
base_model: LeroyDyer/Spydaz_Web_AI
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/Spydaz_Web_AI
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SidXXD/diffusion_attack_cat_background | SidXXD | 2024-06-21T11:47:20Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-06-21T11:29:23Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/diffusion_attack_cat_background
These are Custom Diffusion adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on photo of a <v1*> cat using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
excalibur12/wav2vec2-large-lv60_scoring-phoneme-asr-timit_l2-english_nia037-pic-20s-11k_001 | excalibur12 | 2024-06-21T11:43:12Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-lv60",
"base_model:finetune:facebook/wav2vec2-large-lv60",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-21T02:09:36Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-lv60
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60_scoring-phoneme-asr-timit_l2-english_nia037-pic-20s-11k_001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60_scoring-phoneme-asr-timit_l2-english_nia037-pic-20s-11k_001
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 6.1588
- eval_per: 0.9750
- eval_pcc: 0.6972
- eval_ctc_loss: 4.9763
- eval_mse_loss: 1.1825
- eval_runtime: 698.9423
- eval_samples_per_second: 2.107
- eval_steps_per_second: 2.107
- epoch: 8.01
- step: 6000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- training_steps: 7500
### Framework versions
- Transformers 4.38.1
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.2
|
MugheesAwan11/bge-base-securiti-dataset-3-v23 | MugheesAwan11 | 2024-06-21T11:42:19Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1496",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-06-21T11:42:04Z | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_ndcg@80
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1496
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: We are currently involved in, and may in the future be involved
in, legal proceedings, claims, and government investigations in the ordinary course
of business. These include proceedings, claims, and investigations relating to,
among other things, regulatory matters, commercial matters, intellectual property,
competition, tax, employment, pricing, discrimination, consumer rights, personal
injury, and property rights.
sentences:
- What factors does the regulatory authority consider when ensuring data protection
in cross border transfers in Zimbabwe?
- How does Securiti enable enterprises to safely use data and the cloud while managing
security, privacy, and compliance risks?
- What types of legal issues is the company currently involved in?
- source_sentence: The Company’s minority market share in the global smartphone, personal
computer and tablet markets can make developers less inclined to develop or upgrade
software for the Company’s products and more inclined to devote their resources
to developing and upgrading software for competitors’ products with larger market
share. When developers focus their efforts on these competing platforms, the availability
and quality of applications for the Company’s devices can suffer.
sentences:
- What is the role of obtaining consent in Thailand's PDPA?
- Why might developers be less inclined to develop or upgrade software for the Company's
products?
- What caused the increase in energy generation and storage segment revenue in 2023?
- source_sentence: '** : EMEA (Europe, the Middle East and Africa) The Irish DPA implements
the GDPR into the national law by incorporating most of the provisions of the
GDPR with limited additions and deletions. It contains several provisions restricting
data subjects’ rights that they generally have under the GDPR, for example, where
restrictions are necessary for the enforcement of civil law claims. Resources*
: Irish DPA Overview Irish Cookie Guidance ### Japan #### Japan’s Act on the Protection
of Personal Information (APPI) **Effective Date (Amended APPI)** : April 01, 2022
**Region** : APAC (Asia-Pacific) Japan’s APPI regulates personal related information
and applies to any Personal Information Controller (the “PIC''''), that is a person
or entity providing personal related information for use in business in Japan.
The APPI also applies to the foreign'
sentences:
- What are the requirements for CIIOs and personal information processors in the
state cybersecurity department regarding cross-border data transfers and certifications?
- How does the Irish DPA implement the GDPR into national law?
- What is the current status of the Personal Data Protection Act in El Salvador
compared to Monaco and Venezuela?
- source_sentence: View Salesforce View Workday View GCP View Azure View Oracle View
US California CCPA View US California CPRA View European Union GDPR View Thailand’s
PDPA View China PIPL View Canada PIPEDA View Brazil's LGPD View \+ More View Privacy
View Security View Governance View Marketing View Resources Blog View Collateral
View Knowledge Center View Securiti Education View Company About Us View Partner
Program View Contact Us View News Coverage
sentences:
- What is the role of ANPD in ensuring LGPD compliance and protecting data subject
rights, including those related to health professionals?
- According to the Spanish data protection law, who is required to hire a DPO if
they possess certain information in the event of a data breach?
- What is GCP and how does it relate to privacy, security, governance, marketing,
and resources?
- source_sentence: 'vital interests of the data subject; Complying with an obligation
prescribed in PDPL, not being a contractual obligation, or complying with an order
from a competent court, the Public Prosecution, the investigation Judge, or the
Military Prosecution; or Preparing or pursuing a legal claim or defense. vs Articles:
44 50, Recitals: 101, 112 GDPR states that personal data shall be transferred
to a third country or international organization with an adequate protection level
as determined by the EU Commission. Suppose there is no decision on an adequate
protection level. In that case, a transfer is only permitted when the data controller
or data processor provides appropriate safeguards that ensure data subject rights.
Appropriate safeguards include: BCRs with specific requirements (e.g., a legal
basis for processing, a retention period, and complaint procedures) Standard data
protection clauses adopted by the EU Commission, level of protection. If there
is no adequate level of protection, then data controllers in Turkey and abroad
shall commit, in writing, to provide an adequate level of protection abroad, as
well as agree on the fact that the transfer is permitted by the Board of KVKK.
vs Articles 44 50 Recitals 101, 112 GDPR states that personal data shall be transferred
to a third country or international organization with an adequate protection level
as determined by the EU Commission. Suppose there is no decision on an adequate
protection level. In that case, a transfer is only permitted when the data controller
or data processor provides appropriate safeguards that ensure data subject'' rights.
Appropriate safeguards include: BCRs with specific requirements (e.g., a legal
basis for processing, a retention period, and complaint procedures); standard
data protection clauses adopted by the EU Commission or by a supervisory authority;
an approved code'
sentences:
- What is the right to be informed in relation to personal data?
- In what situations can a controller process personal data to protect vital interests?
- What obligations in PDPL must data controllers or processors meet to protect personal
data transferred to a third country or international organization?
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.4020618556701031
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5773195876288659
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6804123711340206
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7938144329896907
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4020618556701031
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1924398625429553
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1360824742268041
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07938144329896907
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4020618556701031
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5773195876288659
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6804123711340206
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7938144329896907
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5832092053824987
name: Cosine Ndcg@10
- type: cosine_ndcg@80
value: 0.6222698401457883
name: Cosine Ndcg@80
- type: cosine_mrr@10
value: 0.5174930453280969
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5253009685878662
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.41237113402061853
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5670103092783505
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6597938144329897
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7938144329896907
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.41237113402061853
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18900343642611683
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1319587628865979
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07938144329896907
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.41237113402061853
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5670103092783505
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6597938144329897
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7938144329896907
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5860165941440372
name: Cosine Ndcg@10
- type: cosine_ndcg@80
value: 0.6252535691605303
name: Cosine Ndcg@80
- type: cosine_mrr@10
value: 0.5218622156766489
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5297061448856729
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.41237113402061853
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5979381443298969
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6494845360824743
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7628865979381443
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.41237113402061853
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1993127147766323
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12989690721649483
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07628865979381441
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.41237113402061853
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5979381443298969
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6494845360824743
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7628865979381443
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5782766042135054
name: Cosine Ndcg@10
- type: cosine_ndcg@80
value: 0.6240012013315989
name: Cosine Ndcg@80
- type: cosine_mrr@10
value: 0.5207167403043692
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5307304570652817
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("MugheesAwan11/bge-base-securiti-dataset-3-v23")
# Run inference
sentences = [
"vital interests of the data subject; Complying with an obligation prescribed in PDPL, not being a contractual obligation, or complying with an order from a competent court, the Public Prosecution, the investigation Judge, or the Military Prosecution; or Preparing or pursuing a legal claim or defense. vs Articles: 44 50, Recitals: 101, 112 GDPR states that personal data shall be transferred to a third country or international organization with an adequate protection level as determined by the EU Commission. Suppose there is no decision on an adequate protection level. In that case, a transfer is only permitted when the data controller or data processor provides appropriate safeguards that ensure data subject rights. Appropriate safeguards include: BCRs with specific requirements (e.g., a legal basis for processing, a retention period, and complaint procedures) Standard data protection clauses adopted by the EU Commission, level of protection. If there is no adequate level of protection, then data controllers in Turkey and abroad shall commit, in writing, to provide an adequate level of protection abroad, as well as agree on the fact that the transfer is permitted by the Board of KVKK. vs Articles 44 50 Recitals 101, 112 GDPR states that personal data shall be transferred to a third country or international organization with an adequate protection level as determined by the EU Commission. Suppose there is no decision on an adequate protection level. In that case, a transfer is only permitted when the data controller or data processor provides appropriate safeguards that ensure data subject' rights. Appropriate safeguards include: BCRs with specific requirements (e.g., a legal basis for processing, a retention period, and complaint procedures); standard data protection clauses adopted by the EU Commission or by a supervisory authority; an approved code",
'What obligations in PDPL must data controllers or processors meet to protect personal data transferred to a third country or international organization?',
'In what situations can a controller process personal data to protect vital interests?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4021 |
| cosine_accuracy@3 | 0.5773 |
| cosine_accuracy@5 | 0.6804 |
| cosine_accuracy@10 | 0.7938 |
| cosine_precision@1 | 0.4021 |
| cosine_precision@3 | 0.1924 |
| cosine_precision@5 | 0.1361 |
| cosine_precision@10 | 0.0794 |
| cosine_recall@1 | 0.4021 |
| cosine_recall@3 | 0.5773 |
| cosine_recall@5 | 0.6804 |
| cosine_recall@10 | 0.7938 |
| cosine_ndcg@10 | 0.5832 |
| cosine_ndcg@80 | 0.6223 |
| cosine_mrr@10 | 0.5175 |
| **cosine_map@100** | **0.5253** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4124 |
| cosine_accuracy@3 | 0.567 |
| cosine_accuracy@5 | 0.6598 |
| cosine_accuracy@10 | 0.7938 |
| cosine_precision@1 | 0.4124 |
| cosine_precision@3 | 0.189 |
| cosine_precision@5 | 0.132 |
| cosine_precision@10 | 0.0794 |
| cosine_recall@1 | 0.4124 |
| cosine_recall@3 | 0.567 |
| cosine_recall@5 | 0.6598 |
| cosine_recall@10 | 0.7938 |
| cosine_ndcg@10 | 0.586 |
| cosine_ndcg@80 | 0.6253 |
| cosine_mrr@10 | 0.5219 |
| **cosine_map@100** | **0.5297** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4124 |
| cosine_accuracy@3 | 0.5979 |
| cosine_accuracy@5 | 0.6495 |
| cosine_accuracy@10 | 0.7629 |
| cosine_precision@1 | 0.4124 |
| cosine_precision@3 | 0.1993 |
| cosine_precision@5 | 0.1299 |
| cosine_precision@10 | 0.0763 |
| cosine_recall@1 | 0.4124 |
| cosine_recall@3 | 0.5979 |
| cosine_recall@5 | 0.6495 |
| cosine_recall@10 | 0.7629 |
| cosine_ndcg@10 | 0.5783 |
| cosine_ndcg@80 | 0.624 |
| cosine_mrr@10 | 0.5207 |
| **cosine_map@100** | **0.5307** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,496 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 67 tokens</li><li>mean: 216.99 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 21.6 tokens</li><li>max: 102 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------|
| <code>Leader in Data Privacy View Events Spotlight Talks Education Contact Us Schedule a Demo Products By Use Cases By Roles Data Command Center View Learn more Asset and Data Discovery Discover dark and native data assets Learn more Data Access Intelligence & Governance Identify which users have access to sensitive data and prevent unauthorized access Learn more Data Privacy Automation PrivacyCenter.Cloud | Data Mapping | DSR Automation | Assessment Automation | Vendor Assessment | Breach Management | Privacy Notice Learn more Sensitive Data Intelligence Discover & Classify Structured and Unstructured Data | People Data Graph Learn more Data Flow Intelligence & Governance Prevent sensitive data sprawl through real-time streaming platforms Learn more Data Consent Automation First Party Consent | Third Party & Cookie</code> | <code>What is the purpose of the Data Command Center?</code> |
| <code>data subject must be notified of any such extension within one month of receiving the request, along with the reasons for the delay and the possibility of complaining to the supervisory authority. The right to restrict processing applies when the data subject contests data accuracy, the processing is unlawful, and the data subject opposes erasure and requests restriction. The controller must inform data subjects before any such restriction is lifted. Under GDPR, the data subject also has the right to obtain from the controller the rectification of inaccurate personal data and to have incomplete personal data completed. Article: 22 Under PDPL, if a decision is based solely on automated processing of personal data intended to assess the data subject regarding his/her performance at work, financial standing, credit-worthiness, reliability, or conduct, then the data subject has the right to request processing in a manner that is not solely automated. This right shall not apply where the decision is taken in the course of entering into</code> | <code>What is the requirement for notifying the data subject of any extension under GDPR and PDPL?</code> |
| <code>Automation PrivacyCenter.Cloud | Data Mapping | DSR Automation | Assessment Automation | Vendor Assessment | Breach Management | Privacy Notice Learn more Sensitive Data Intelligence Discover & Classify Structured and Unstructured Data | People Data Graph Learn more Data Flow Intelligence & Governance Prevent sensitive data sprawl through real-time streaming platforms Learn more Data Consent Automation First Party Consent | Third Party & Cookie Consent Learn more Data Security Posture Management Secure sensitive data in hybrid multicloud and SaaS environments Learn more Data Breach Impact Analysis & Response Analyze impact of a data breach and coordinate response per global regulatory obligations Learn more Data Catalog Automatically catalog datasets and enable users to find, understand, trust and access data Learn more Data Lineage Track changes and transformations of, PrivacyCenter.Cloud | Data Mapping | DSR Automation | Assessment Automation | Vendor Assessment | Breach Management | Privacy Notice Learn more Sensitive Data Intelligence Discover & Classify Structured and Unstructured Data | People Data Graph Learn more Data Flow Intelligence & Governance Prevent sensitive data sprawl through real-time streaming platforms Learn more Data Consent Automation First Party Consent | Third Party & Cookie Consent Learn more Data Security Posture Management Secure sensitive data in hybrid multicloud and SaaS environments Learn more Data Breach Impact Analysis & Response Analyze impact of a data breach and coordinate response per global regulatory obligations Learn more Data Catalog Automatically catalog datasets and enable users to find, understand, trust and access data Learn more Data Lineage Track changes and transformations of data throughout its</code> | <code>What is the purpose of Third Party & Cookie Consent in data automation and security?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256
],
"matryoshka_weights": [
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_768_cosine_map@100 |
|:-------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|
| 0.2128 | 10 | 3.8486 | - | - | - |
| 0.4255 | 20 | 2.3622 | - | - | - |
| 0.6383 | 30 | 2.3216 | - | - | - |
| 0.8511 | 40 | 1.3247 | - | - | - |
| **1.0** | **47** | **-** | **0.5307** | **0.5297** | **0.5253** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
morturr/flan-t5-small-amazon-text-classification | morturr | 2024-06-21T11:40:52Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-11T11:11:17Z | ---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
model-index:
- name: flan-t5-small-amazon-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-amazon-text-classification
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.1+cu121
- Datasets 2.10.1
- Tokenizers 0.15.2
|
ctrlg/hmm_gpt2-large_common-gen_32768 | ctrlg | 2024-06-21T11:35:40Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-06-21T11:04:27Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
LukasEder/ddpm-celebahq-finetuned-impressionsim | LukasEder | 2024-06-21T11:24:12Z | 6 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-06-21T11:23:46Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('LukasEder/ddpm-celebahq-finetuned-impressionsim')
image = pipeline().images[0]
image
```
|
Vanster/ppo-Pyramids | Vanster | 2024-06-21T11:11:22Z | 14 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2024-06-19T13:00:26Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Vanster/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Dhahlan2000/Chitti-Base-model-for-GPT-v9 | Dhahlan2000 | 2024-06-21T11:04:34Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Dhahlan2000/Chitti-Base-model-for-GPT-v8",
"base_model:finetune:Dhahlan2000/Chitti-Base-model-for-GPT-v8",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-21T11:04:21Z | ---
license: apache-2.0
base_model: Dhahlan2000/Chitti-Base-model-for-GPT-v8
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Chitti-Base-model-for-GPT-v9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Chitti-Base-model-for-GPT-v9
This model is a fine-tuned version of [Dhahlan2000/Chitti-Base-model-for-GPT-v8](https://huggingface.co/Dhahlan2000/Chitti-Base-model-for-GPT-v8) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8886
- Bleu: 4.3804
- Gen Len: 12.624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.1399 | 1.0 | 9282 | 2.9151 | 3.8681 | 12.7067 |
| 3.1171 | 2.0 | 18564 | 2.9030 | 4.1487 | 12.6253 |
| 3.0915 | 3.0 | 27846 | 2.8906 | 4.1716 | 12.722 |
| 3.0923 | 4.0 | 37128 | 2.8886 | 4.3804 | 12.624 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
vdavidr/CodeLlama-7b-Instruct-hf_Fi__size_52_epochs_10_2024-06-21_04-40-42_3556390 | vdavidr | 2024-06-21T11:03:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T03:06:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ksgk-fy/genius_v2_merge | Ksgk-fy | 2024-06-21T11:00:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T03:50:02Z | ---
library_name: transformers
tags: []
---
What if Genius are uploaded through finetuning LLM with their podcast transcript? Chat with them and find out about it ;>

|
AdityaMa/mt5-base | AdityaMa | 2024-06-21T10:58:13Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-21T10:50:09Z | ---
license: apache-2.0
---
|
SaiprasadP/wavv | SaiprasadP | 2024-06-21T10:45:20Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-06-21T10:44:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eustlb/distil-large-v3-fr | eustlb | 2024-06-21T10:42:16Z | 40 | 9 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"asr",
"fr",
"dataset:mozilla-foundation/common_voice_17_0",
"dataset:espnet/yodas",
"dataset:facebook/multilingual_librispeech",
"arxiv:2311.00430",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-30T09:22:16Z | ---
license: mit
datasets:
- mozilla-foundation/common_voice_17_0
- espnet/yodas
- facebook/multilingual_librispeech
language:
- fr
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- asr
- whisper
model-index:
- name: distil-large-v3-fr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0, short-form (<= 30sec)
type: mozilla-foundation/common_voice_17_0
config: fr
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 12.675
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech, short-form (<= 30sec)
type: facebook/multilingual_librispeech
config: french
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 5.865
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: VoxPopuli, short-form (<= 30sec)
type: facebook/voxpopuli
config: fr
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 10.832
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Fleurs, short-form (<= 30sec)
type: google/fleurs
config: fr_fr
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 7.989
---
# Distil-Whisper: distil-large-v3-fr
Distil-Whisper for English Automatic Speech Recognition (ASR) was proposed in the paper [Robust Knowledge Distillation via Large-Scale Pseudo Labelling](https://arxiv.org/abs/2311.00430).
This is the knowledge distilled version of OpenAI's [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3) for French ASR.
The result is a distilled model that performs within **2% WER of [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3)** on out-of-distribution evaluation sets for both short-form and long form transcription. Moreover, it is **5.9x** faster than [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3) and **1.3** times faster than the tiniest version of whisper while being incomparably more accurate.
| Model | Params (M) | Rel. Latency | Short-Form WER | Long-Form WER |
| :--------------------- | :--------: | :----------: | :------------: | :-----------: |
| whisper-tiny | 37.8 | 4.7 | 43.7 | 28.2 |
| whisper-base | 72.6 | 3.7 | 30.6 | 18.7 |
| whisper-small | 242 | 2.3 | 16.2 | 12.6 |
| whisper-medium | 764 | 1.3 | 11.7 | 11.0 |
| whisper-large-v3 | 1540 | 1.0 | 7.8 | 9.0 |
| **distil-large-v3-fr** | **756** | **5.9** | **9.3** | **11.1** |
*latencies benchmarked to generate 128 tokens on A100 40GB with a batch size of 1. More details about inference performances in [inference speed](#inference-speed) section.
*WERs are averaged on the test sets. More details in [short-form](#short-form) and [long-form](#long-form) results sections.
## Table of Contents
1. [Transformers Usage](#transformers-usage)
* [Short-Form Transcription](#short-form-transcription)
* [Sequential Long-Form](#sequential-long-form)
* [Chunked Long-Form](#chunked-long-form)
* [Speculative Decoding](#speculative-decoding)
* [Additional Speed and Memory Improvements](#additional-speed--memory-improvements)
2. [Library Integrations](#library-integrations)
* [Whisper cpp](#whispercpp)
* [Transformers.js](#transformersjs)
3. [Model Details](#model-details)
* [Architecture](#architecture)
* [Training](#training)
4. [Results](#results)
* [Evaluation methodology](#evaluation-methodology)
* [Short-Form](#short-form)
* [Long-Form](#long-form)
* [Inference Speed](#inference-speed)
4. [License](#license)
## Transformers Usage
distil-large-v3-fr is supported in the Hugging Face 🤗 Transformers library from version 4.41 onwards. To run the model, first
install the latest version of Transformers. For this example, we'll also install 🤗 Datasets to load a toy audio dataset
from the Hugging Face Hub:
```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate datasets[audio]
```
### Short-Form Transcription
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class to transcribe short-form audio files (< 30-seconds) as follows:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "eustlb/distil-large-v3-fr"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("google/fleurs", "fr_fr", split="train", streaming=True)
sample = next(iter(dataset))["audio"]
result = pipe(sample)
print(result["text"])
```
To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:
```diff
- result = pipe(sample)
+ result = pipe("audio.mp3")
```
For segment-level timestamps, pass the argument `return_timestamps=True` and return the `"chunks"` output:
```python
result = pipe(sample, return_timestamps=True)
print(result["chunks"])
```
<details>
<summary> For more control over the generation parameters, use the model + processor API directly: </summary>
Ad-hoc generation arguments can be passed to `model.generate`, including `num_beams` for beam-search, `return_timestamps`
for segment-level timestamps, and `prompt_ids` for prompting. See the [docstrings](https://huggingface.co/docs/transformers/en/model_doc/whisper#transformers.WhisperForConditionalGeneration.generate)
for more details.
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "eustlb/distil-large-v3-fr"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
dataset = load_dataset("google/fleurs", "fr_fr", split="train", streaming=True)
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = next(iter(dataset))["audio"]
input_features = processor(
sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features
input_features = input_features.to(device, dtype=torch_dtype)
gen_kwargs = {
"max_new_tokens": 128,
"num_beams": 1,
"return_timestamps": False,
}
pred_ids = model.generate(input_features, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=gen_kwargs["return_timestamps"])
print(pred_text)
```
</details>
### Sequential Long-Form
distil-large-v3 is compatible with OpenAI's sequential
long-form transcription algorithm. This algorithm uses a sliding window for buffered inference of long audio files (> 30-seconds),
and returns more accurate transcriptions compared to the [chunked long-form algorithm](#chunked-long-form).
The sequential long-form algorithm should be used in either of the following scenarios:
1. Transcription accuracy is the most important factor, and latency is less of a consideration
2. You are transcribing **batches** of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate
If you are transcribing single long audio files and latency is the most important factor, you should use the chunked algorithm
described [below](#chunked-long-form). For a detailed explanation of the different algorithms, refer to Sections 5 of
the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf).
The [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class can be used to transcribe long audio files with the sequential algorithm as follows:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "eustlb/distil-large-v3-fr"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("eustlb/french-long-form-test", split="test", streaming=True)
sample = next(iter(dataset))["audio"]
result = pipe(sample)
print(result["text"])
```
<details>
<summary> For more control over the generation parameters, use the model + processor API directly: </summary>
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "eustlb/distil-large-v3-fr"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
dataset = load_dataset("eustlb/french-long-form-test", split="test", streaming=True)
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = next(iter(dataset))["audio"]
inputs = processor(
sample["array"],
sampling_rate=sample["sampling_rate"],
return_tensors="pt",
truncation=False,
padding="longest",
return_attention_mask=True,
)
inputs = inputs.to(device, dtype=torch_dtype)
gen_kwargs = {
"max_new_tokens": 448,
"num_beams": 1,
"condition_on_prev_tokens": False,
"compression_ratio_threshold": 1.35, # zlib compression ratio threshold (in token space)
"temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
"logprob_threshold": -1.0,
"no_speech_threshold": 0.6,
"return_timestamps": True,
}
pred_ids = model.generate(**inputs, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=False)
print(pred_text)
```
</details>
### Chunked Long-Form
distil-large-v3-fr remains compatible with the Transformers chunked long-form algorithm. This algorithm should be used when
a single large audio file is being transcribed and the fastest possible inference is required. In such circumstances,
the chunked algorithm is up to 9x faster than OpenAI's sequential long-form implementation (see Table 7 of the
[Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf)).
To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For distil-large-v3-fr, a chunk length of 25-seconds
is optimal. To activate batching over long audio files, pass the argument `batch_size`:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "eustlb/distil-large-v3-fr"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=25,
batch_size=16,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("eustlb/french-long-form-test", split="test", streaming=True)
sample = next(iter(dataset))["audio"]
result = pipe(sample)
print(result["text"])
```
### Speculative Decoding
distil-large-v3 is the first Distil-Whisper model that can be used as an assistant to Whisper large-v3 for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding).
Speculative decoding mathematically ensures that exactly the same outputs as Whisper are obtained, while being 2 times faster.
This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.
In the following code-snippet, we load the assistant Distil-Whisper model standalone to the main Whisper pipeline. We then
specify it as the "assistant model" for generation:
```python
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
assistant_model_id = "eustlb/distil-large-v3-fr"
assistant_model = AutoModelForCausalLM.from_pretrained(
assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)
model_id = "openai/whisper-large-v3"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
generate_kwargs={"assistant_model": assistant_model},
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("google/fleurs", "fr_fr", split="train", streaming=True)
sample = next(iter(dataset))["audio"]
result = pipe(sample)
print(result["text"])
```
For more details on speculative decoding, refer to the blog post [Speculative Decoding for 2x Faster Whisper Inference](https://huggingface.co/blog/whisper-speculative-decoding).
### Additional Speed & Memory Improvements
You can apply additional speed and memory improvements to Distil-Whisper to further reduce the inference speed and VRAM
requirements. These optimisations primarily target the attention kernel, swapping it from an eager implementation to a
more efficient flash attention version.
#### Flash Attention 2
We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2)
if your GPU allows for it. To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):
```
pip install flash-attn --no-build-isolation
```
Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:
```diff
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, attn_implementation="flash_attention_2")
```
#### Torch Scale-Product-Attention (SDPA)
If your GPU does not support Flash Attention, we recommend making use of PyTorch [scaled dot-product attention (SDPA)](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html).
This attention implementation is activated **by default** for PyTorch versions 2.1.1 or greater. To check
whether you have a compatible PyTorch version, run the following Python code snippet:
```python
from transformers.utils import is_torch_sdpa_available
print(is_torch_sdpa_available())
```
If the above returns `True`, you have a valid version of PyTorch installed and SDPA is activated by default. If it
returns `False`, you need to upgrade your PyTorch version according to the [official instructions](https://pytorch.org/get-started/locally/)
Once a valid PyTorch version is installed, SDPA is activated by default. It can also be set explicitly by specifying
`attn_implementation="sdpa"` as follows:
```diff
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, attn_implementation="sdpa")
```
For more information about how to use the SDPA refer to the [Transformers SDPA documentation](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention).
#### Torch compile
Coming soon...
#### 4-bit and 8-bit Inference
Coming soon...
## Library Integrations
### Whisper.cpp
distil-large-v3-fr can be run with the [Whisper.cpp](https://github.com/ggerganov/whisper.cpp) package with the original
sequential long-form transcription algorithm. In a provisional benchmark on Mac M1, distil-large-v3 is over 5x faster
than Whisper large-v3, while performing to within 0.8% WER over long-form audio.
Steps for getting started:
1. Clone the Whisper.cpp repository:
```bash
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp
```
2. Install the Hugging Face Hub Python package:
```bash
pip install --upgrade huggingface_hub
```
And download the GGML weights for distil-large-v3 using the following Python snippet:
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id='eustlb/distil-large-v3-fr-ggml', filename='ggml-distil-large-v3-fr.bin', local_dir='./models')
```
Note that if you do not have a Python environment set-up, you can also download the weights directly with `wget`:
```bash
wget https://huggingface.co/eustlb/distil-large-v3-fr-ggml/resolve/main/ggml-distil-large-v3-fr.bin -P ./models
````
3. Run inference
```bash
wget https://huggingface.co/spaces/eustlb/whisper-vs-distil-whisper-fr/resolve/main/assets/example_1.wav
make -j && ./main -m models/ggml-distil-large-v3-fr.bin -f example_1.wav
```
### Transformers.js
Distil-Whisper can be run completely in your web browser with [Transformers.js](http://github.com/xenova/transformers.js):
1. Install Transformers.js from [NPM](https://www.npmjs.com/package/@xenova/transformers):
```bash
npm i @xenova/transformers
```
2. Import the library and perform inference with the pipeline API.
```js
import { pipeline } from '@xenova/transformers';
const transcriber = await pipeline('automatic-speech-recognition', 'eustlb/distil-large-v3-fr');
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
// { text: " And so, my fellow Americans, ask not what your country can do for you. Ask what you can do for your country." }
```
Refer to the Transformers.js [docs](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AutomaticSpeechRecognitionPipeline)
for further information.
## Model Details
### Architecture
Distil-Whisper inherits the encoder-decoder architecture from Whisper. The encoder maps a sequence of speech vector inputs to a sequence of hidden-state vectors. The decoder auto-regressively predicts text tokens, conditional on all previous tokens and the encoder hidden-states. Consequently, the encoder is only run forward once, whereas the decoder is run as many times as the number of tokens generated. In practice, this means the decoder accounts for over 90% of total inference time. Thus, to optimise for latency, the focus is on minimising the inference time of the decoder.
To distill the Whisper model, we reduce the number of decoder layers while keeping the encoder fixed. The encoder (shown in green) is entirely copied from the teacher to the student and frozen during training. The student's decoder structure is copied from [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3), with the only difference being a reduction from 32 to 2 decoder layers. These layers are initialized from distil-large-v3 to leverage language transfer from English to French (more details [here](https://github.com/huggingface/distil-whisper/tree/main/training#22-language-transfer)).
### Training
#### Data
distil-large-v3-fr is trained on 4,515 hours of audio data from three open-source, permissively licensed speech datasets on the
Hugging Face Hub:
| Dataset | Size / h | Speakers | Domain | Licence |
| --------------------------------------------------------------------------------------------- | -------- | -------- | ------------------ | ----------------------------------------------------------- |
| [Common Voice 17](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0) | 1,014 | unknown | Narrated Wikipedia | [CC0-1.0](https://choosealicense.com/licenses/cc0-1.0/) |
| [MultiLingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) | 1,077 | 142 | Audiobook | [CC-BY-4.0](https://choosealicense.com/licenses/cc-by-4.0/) |
| [YODAS fr000 split](https://huggingface.co/datasets/espnet/yodas) | 2,424 | unknown | YouTube | [CC-BY-3.0](https://creativecommons.org/licenses/by/3.0/) |
| **Total** | 4,515 | 142+ | | |
The audio data is then pseudo-labelled using the Whisper large-v3 model: we use Whisper to generate predictions for all
the audio in our training set and use these as the target labels during training. Using pseudo-labels ensures that the
transcriptions are consistently formatted across datasets and provides sequence-level distillation signal during training.
#### WER Filter
The Whisper pseudo-label predictions are subject to mis-transcriptions and hallucinations. To ensure we only train on
accurate pseudo-labels, we employ a simple WER heuristic during training. First, we normalise the Whisper pseudo-labels
and the ground truth labels provided by each dataset. We then compute the WER between these labels. If the WER exceeds
a specified threshold, we discard the training example. Otherwise, we keep it for training.
We chose for this training a WER threshold of 20%, resulting in an **effective training set of 2110 hours** (750 for [Common Voice 17](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0), 1040 for [MultiLingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and 320 for [YODAS fr000 split](https://huggingface.co/datasets/espnet/yodas)).
Section 9.2 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430) demonstrates the effectiveness of this filter
for improving downstream performance of the distilled model. We also partially attribute Distil-Whisper's robustness to
hallucinations to this filter.
#### Training procedure
The model was trained for 18,000 optimisation steps (or 14 epochs) with batch size 256. We saved the best model, based on the global wer score on validation splits, reached after 14,000 optimization steps (or 11 epochs). See the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430) for more details (training objective, etc).
## Results
The distilled model performs to within 1% WER of Whisper large-v3 on out-of-distribution (Voxpopuli, Fleurs) short-form audio and within
2.5% WER on out-of-distribuion sequential long-form decoding.
### Evaluation methodology
The model has been tested for both in-distribution (Common Voice 17 and Multilingual Librispeech) and out-of-distribution (Fleurs, Voxpopuli, custom [long-form test set](https://huggingface.co/datasets/speech-recognition-community-v2/dev_data)) short-form and long-form transcription performances. Models have been evaluated with SDPA, float32 and batch size 32.
**Short-form evaluations** are conducted on the four given datasets by first applying a filter to exclude samples longer than 30 seconds.
**Long-form evaluation** is conducted on a custom out-of-distribution [long-form test set](https://huggingface.co/datasets/eustlb/french-long-form-test) using OpenAI's sequential long-form transcription algorithm (see [Sequential Long-Form](#sequential-long-form) section) with long-form generation parameters that can be found [here](https://github.com/huggingface/distil-whisper/blob/a5ed489ba6edb405ecef334ba0feec1bdca7a948/training/run_eval.py#L670C5-L676C6).
### Short-Form
| Model | Common Voice 17 | Multilingual Librispeech | voxpopuli | fleurs | RTFx |
| :--------------------- | :-------------: | :----------------------: | :--------: | :-------: | :---------: |
| whisper-tiny | 57.141 | 38.049 | 32.346 | 47.4 | 265.226 |
| whisper-base | 42.58 | 25.235 | 26.701 | 27.773 | 237.195 |
| whisper-small | 22.56 | 13.576 | 14.486 | 14.165 | 196.932 |
| whisper-medium | 15.51 | 9.541 | 11.836 | 9.992 | 93.428 |
| whisper-large-v3 | 11.038 | 4.762 | 9.83 | 5.624 | 62.845 |
| **distil-large-v3-fr** | **12.675** | **5.865** | **10.832** | **7.989** | **106.291** |
*the above datasets correspond to test splits
*RTFx = 1 / RTF, where RTF is the [Real Time Factor](https://openvoice-tech.net/wiki/Real-time-factor). To be interpreted as audio processed (in seconds) per second of processing.
### Long-Form
| Model Name | RTFx | [long-form test set](https://huggingface.co/datasets/eustlb/french-long-form-test) |
| :--------------------: | :---------: | :--------------------------------------------------------------------------------: |
| whisper-tiny | 121.389 | 28.158 |
| whisper-base | 109.366 | 18.665 |
| whisper-small | 83.049 | 12.557 |
| whisper-medium | 47.807 | 11.023 |
| whisper-large-v3 | 38.294 | 9.008 |
| **distil-large-v3-fr** | **101.326** | **11.13** |
### Inference speed
Reported latencies were benchmarked on a 40GB nvidia A100, generating 128 tokens with SDPA, bfloat16, 3 warmup steps, 5 measures, one beam.
The benchmarking script can be found [here](https://gist.github.com/eustlb/ef06f00858cbae4d8743f5024be869ec). The time measured is the time do one forward pass of the encoder and 128 autoregressive forward passes of the decoder.
<p align="center">
<img src="https://huggingface.co/eustlb/distil-large-v3-fr/resolve/main/assets/relative_latencies.png" alt="latencies" width="100%">
</p>
## Reproducing Distil-Whisper
Training and evaluation code to reproduce Distil-Whisper is available under the Distil-Whisper repository: https://github.com/huggingface/distil-whisper/tree/main/training
## License
distil-large-v3-fr inherits the [MIT license](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) from OpenAI's Whisper model.
## Citation
If you use this model, please consider citing the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430):
```
@misc{gandhi2023distilwhisper,
title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
year={2023},
eprint={2311.00430},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgements
* OpenAI for the Whisper [model](https://huggingface.co/openai/whisper-large-v3), in particular Jong Wook Kim for the [original codebase](https://github.com/openai/whisper) and training discussions
* Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration
* [Georgi Gerganov](https://huggingface.co/ggerganov) for the Whisper cpp integration
* [Joshua Lochner](https://huggingface.co/xenova) for the Transformers.js integration
* [Vaibhav Srivastav](https://huggingface.co/reach-vb) for Distil-Whisper distribution
* [Raghav Sonavane](https://huggingface.co/rsonavane/distil-whisper-large-v2-8-ls) for an early iteration of Distil-Whisper on the LibriSpeech datasets |
henilp105/InjecAgent-Llama-2-7b-chat-hf | henilp105 | 2024-06-21T10:37:25Z | 2 | 0 | peft | [
"peft",
"pytorch",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-06-18T19:07:01Z | ---
base_model: meta-llama/Llama-2-7b-chat-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
vdavidr/llama-7b-finnish-instruct-v0.2_Fi__size_52_epochs_10_2024-06-21_04-50-18_3556391 | vdavidr | 2024-06-21T10:28:51Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T03:26:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vdavidr/deepseek-coder-6.7b-instruct_Fi__size_52_epochs_10_2024-06-21_03-01-23_3556388 | vdavidr | 2024-06-21T10:28:05Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-21T01:27:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
droneambulanceproject/drone_ambulance_project_object_detection | droneambulanceproject | 2024-06-21T10:22:18Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-06-09T15:18:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Omaratef3221/Qwen2-0.5B-Instruct-SQL-query-generator | Omaratef3221 | 2024-06-21T10:12:23Z | 261 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"text-to-SQL",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-19T14:32:41Z | ---
license: apache-2.0
base_model: Qwen/Qwen2-0.5B-Instruct
tags:
- trl
- sft
- text-to-SQL
- generated_from_trainer
model-index:
- name: Qwen2-0.5B-Instruct-SQL-query-generator
results: []
---
# Qwen2-0.5B-Instruct-SQL-query-generator
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [motherduckdb/duckdb-text2sql-25k](https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k) dataset (first 10k rows).
## Model Description
The Qwen2-0.5B-Instruct-SQL-query-generator is a specialized model fine-tuned to generate SQL queries from natural language text prompts. This fine-tuning allows the model to better understand and convert text inputs into corresponding SQL queries, facilitating tasks such as data retrieval and database querying through natural language interfaces.
## Intended Uses & Limitations
### Intended Uses
- Convert natural language questions to SQL queries.
- Facilitate data retrieval from databases using natural language.
- Assist in building natural language interfaces for databases.
### Limitations
- The model is fine-tuned on a specific subset of data and may not generalize well to all SQL query formats or databases.
- It is recommended to review the generated SQL queries for accuracy and security, especially before executing them on live databases.
## Training and Evaluation Data
### Training Data
The model was fine-tuned on the [motherduckdb/duckdb-text2sql-25k](https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k) dataset, specifically using the first 10,000 rows. This dataset includes natural language questions and their corresponding SQL queries, providing a robust foundation for training a text-to-SQL model.
### Evaluation Data
The evaluation data used for fine-tuning was a subset of the same dataset, ensuring consistency in training and evaluation metrics.
## Training Procedure
Github Code: https://github.com/omaratef3221/SQL_Query_Generator_llm/
### Training Hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 1e-4
- `train_batch_size`: 8
- `save_steps`: 1
- `logging_steps`: 500
- `num_epochs`: 5
### Training Frameworks
- Transformers: 4.39.0
- PyTorch: 2.2.0
- Datasets: 2.20.0
- Tokenizers: 0.15.2
### Training Results
During the training process, the model was periodically evaluated to ensure it was learning effectively. The specific training metrics and results were logged for further analysis.
## Model Performance
### Evaluation Metrics
- Evaluation metrics such as accuracy, precision, recall, and F1-score were used to assess the model's performance. (Specific values can be added here if available.)
## Usage
To use this model, simply load it from the Hugging Face Model Hub and provide natural language text prompts. The model will generate the corresponding SQL queries.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("omaratef3221/Qwen2-0.5B-Instruct-SQL-query-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("omaratef3221/Qwen2-0.5B-Instruct-SQL-query-generator")
inputs = tokenizer("Show me all employees with a salary greater than $100,000", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
|
AlekseyElygin/Phi-3-mini-4k-instruct | AlekseyElygin | 2024-06-21T10:11:02Z | 20 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-21T08:53:10Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** AlekseyElygin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Bharanidharan07/Idefics_2_8b_OG | Bharanidharan07 | 2024-06-21T10:05:21Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"idefics2",
"image-text-to-text",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-06-21T06:27:30Z | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: idefics_2_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idefics_2_finetuned
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Subsets and Splits