modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
Markimvsv/Teste | Markimvsv | 2024-02-25T12:59:06Z | 0 | 0 | asteroid | [
"asteroid",
"text-classification",
"av",
"dataset:teknium/OpenHermes-2.5",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-02-25T12:57:24Z | ---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
language:
- av
metrics:
- accuracy
library_name: asteroid
pipeline_tag: text-classification
--- |
farzanrahmani/vilt_finetuned_200 | farzanrahmani | 2024-02-25T12:58:21Z | 64 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vilt",
"visual-question-answering",
"generated_from_trainer",
"dataset:vqa",
"base_model:dandelin/vilt-b32-mlm",
"base_model:finetune:dandelin/vilt-b32-mlm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2024-02-25T12:19:05Z | ---
license: apache-2.0
base_model: dandelin/vilt-b32-mlm
tags:
- generated_from_trainer
datasets:
- vqa
model-index:
- name: vilt_finetuned_200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vilt_finetuned_200
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the vqa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the hedged sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
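As a rough illustration only, the list above corresponds to a `transformers` `TrainingArguments` configuration along the following lines; this is a hedged sketch, not the authors' script, and `output_dir` is an assumption. The Adam betas and epsilon listed above are the library defaults, so they need no explicit arguments.
```python
# Hedged sketch of the equivalent TrainingArguments; output_dir is an
# illustrative assumption, and dataset/model wiring is omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vilt_finetuned_200",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```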
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Priyanshuchaudhary2425/EmotiNet | Priyanshuchaudhary2425 | 2024-02-25T12:57:23Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-25T12:31:22Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: EmotiNet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EmotiNet
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3224
- Accuracy: 0.9242
- Precision: 0.8830
- Recall: 0.8990
- F1: 0.8902
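The card ships no usage snippet, so here is a minimal hedged sketch with the `transformers` pipeline API; the emotion label names come from the model's config and are not documented in this card.
```python
# Hedged usage sketch; the returned label names depend on this model's
# config, which this card does not document.
from transformers import pipeline

classifier = pipeline("text-classification", model="Priyanshuchaudhary2425/EmotiNet")
print(classifier("I'm thrilled with these results!"))
```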
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1494 | 1.0 | 1500 | 0.3036 | 0.9237 | 0.8783 | 0.9166 | 0.8921 |
| 0.1481 | 2.0 | 3000 | 0.2944 | 0.9242 | 0.8893 | 0.8867 | 0.8877 |
| 0.072 | 3.0 | 4500 | 0.3224 | 0.9242 | 0.8830 | 0.8990 | 0.8902 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
|
codesagar/prompt-guard-v1 | codesagar | 2024-02-25T12:57:09Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T12:57:04Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
jennfar/Current_Model | jennfar | 2024-02-25T12:54:55Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-25T12:54:55Z | ---
license: creativeml-openrail-m
---
|
zayjean/llama-2-13b_verify-bo-lora-r8-a32-d0_3K-E20 | zayjean | 2024-02-25T12:54:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-25T12:54:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
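Until the authors fill this section in, a hedged sketch of the generic `transformers` loading path; the model class is an assumption inferred from the repo name (a LLaMA-2-13B derivative) and is not confirmed by this card.
```python
# Hedged sketch; AutoModelForCausalLM is an assumption based on the repo
# name and is not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "zayjean/llama-2-13b_verify-bo-lora-r8-a32-d0_3K-E20"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
```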
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ThuyNT03/SOMD-train-scibert | ThuyNT03 | 2024-02-25T12:50:32Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_cased",
"base_model:finetune:allenai/scibert_scivocab_cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-25T08:55:33Z | ---
base_model: allenai/scibert_scivocab_cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: SOMD-train-scibert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SOMD-train-scibert
This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 1.0 | 1243 | 0.0053 | 0.7304 |
| No log | 2.0 | 2486 | 0.0029 | 0.8024 |
| No log | 3.0 | 3729 | 0.0017 | 0.8700 |
| No log | 4.0 | 4972 | 0.0011 | 0.9302 |
| No log | 5.0 | 6215 | 0.0010 | 0.9305 |
| No log | 6.0 | 7458 | 0.0008 | 0.9545 |
| No log | 7.0 | 8701 | 0.0007 | 0.9629 |
| No log | 8.0 | 9944 | 0.0004 | 0.9698 |
| No log | 9.0 | 11187 | 0.0002 | 0.9854 |
| No log | 10.0 | 12430 | 0.0002 | 0.9871 |
| No log | 11.0 | 13673 | 0.0002 | 0.9934 |
| No log | 12.0 | 14916 | 0.0001 | 0.9898 |
| No log | 13.0 | 16159 | 0.0000 | 0.9985 |
| No log | 14.0 | 17402 | 0.0001 | 0.9940 |
| No log | 15.0 | 18645 | 0.0000 | 0.9986 |
| No log | 16.0 | 19888 | 0.0000 | 0.9966 |
| No log | 17.0 | 21131 | 0.0000 | 0.9994 |
| No log | 18.0 | 22374 | 0.0000 | 1.0 |
| No log | 19.0 | 23617 | 0.0000 | 1.0 |
| No log | 20.0 | 24860 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
ThomasFG/101.25-33.75 | ThomasFG | 2024-02-25T12:39:32Z | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small.en",
"base_model:finetune:openai/whisper-small.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-25T10:18:28Z | ---
license: apache-2.0
base_model: openai/whisper-small.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 2024-02-25_11-18-26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2024-02-25_11-18-26
This model is a fine-tuned version of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2631
- Wer: 9.9444
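No usage example is given, so here is a minimal hedged sketch with the `transformers` speech-recognition pipeline; `sample.wav` is a placeholder path, not a file from this repo.
```python
# Hedged usage sketch; "sample.wav" is a placeholder audio path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ThomasFG/101.25-33.75")
print(asr("sample.wav")["text"])
```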
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2109 | 1.0 | 612 | 0.2631 | 9.9444 |
### Framework versions
- Transformers 4.37.2
- Pytorch 1.13.1+cu116
- Datasets 2.17.0
- Tokenizers 0.15.2
|
MaziyarPanahi/Ultra-Smaug-125B-v0.1 | MaziyarPanahi | 2024-02-25T12:32:59Z | 29 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-16T12:51:30Z | ---
license: agpl-3.0
---
Quantized model in GGUF: https://huggingface.co/MaziyarPanahi/Ultra-Smaug-125B-v0.1-GGUF |
MaziyarPanahi/Ultra-Miqu-WizardLM-120B-v0.1 | MaziyarPanahi | 2024-02-25T12:30:03Z | 36 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"wizardlm",
"mique",
"conversational",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-14T22:31:59Z | ---
license: agpl-3.0
pipeline_tag: text-generation
tags:
- merge
- wizardlm
- mique
---
## Examples
There are three killers in a room. Someone enters the room and kills one of them. Nobody leaves the room. How many killers are left in the room? Explain your reasoning step by step.
```
At the beginning, there were three killers in the room.
1. Someone entered the room, making it four people inside (including the new person).
2. The new person then killed one of the original three killers, so now there are two dead bodies in the room.
3. However, the riddle didn't say that the new person wasn't a killer themselves. In fact, they just proved to be one by killing one of the original three.
4. Therefore, after the murder, there are still three killers in the room: the new person who just committed a killing (making them a killer), and the remaining two from the original group of three.
```
Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step
```
1) We know that Mark is faster than Mary, so we can say that Mark's speed > Mary's speed
2) We also know that Mary is faster than Joe, so we can say that Mary's speed > Joe's speed
3) Based on the information above, we can conclude that Mark's speed > Mary's speed > Joe's speed.
4) Therefore, Joe cannot be faster than Mark as his speed is less than both Mark and Mary.
```
If I lay 2 wet shirts out in the sun flat on the ground to dry and it takes 5 hours until they are dry, how long would it take to dry 4 shirts of the same type that way simultaneously?
```
It would still take approximately 5 hours. The number of shirts does not affect the drying time, assuming all other conditions remain constant (such as temperature, humidity, air flow, and amount of moisture).
``` |
codesagar/autotrain-jd6kz-4z28z | codesagar | 2024-02-25T12:23:25Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T12:23:20Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
ActiveLearningAGI/zephyr-7b-sft-qlora | ActiveLearningAGI | 2024-02-25T12:19:58Z | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-22T09:26:57Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-sft-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-qlora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9536
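Since this repo holds a PEFT (QLoRA) adapter rather than full weights, a minimal hedged loading sketch looks like the following; 4-bit loading of the base model, as used in training, is omitted for brevity.
```python
# Hedged sketch: attaching this QLoRA adapter to its Mistral base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "ActiveLearningAGI/zephyr-7b-sft-qlora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```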
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9757 | 1.0 | 8714 | 0.9536 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0 |
Litzy619/V0224P2 | Litzy619 | 2024-02-25T12:18:32Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-02-25T08:40:59Z | ---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0224P2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0224P2
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0572 | 0.13 | 10 | 0.9574 |
| 0.9041 | 0.26 | 20 | 0.8622 |
| 0.8373 | 0.39 | 30 | 0.8201 |
| 0.7968 | 0.52 | 40 | 0.7984 |
| 0.784 | 0.65 | 50 | 0.7833 |
| 0.7691 | 0.78 | 60 | 0.7743 |
| 0.7479 | 0.91 | 70 | 0.7653 |
| 0.735 | 1.04 | 80 | 0.7590 |
| 0.6958 | 1.17 | 90 | 0.7547 |
| 0.7106 | 1.3 | 100 | 0.7509 |
| 0.7013 | 1.43 | 110 | 0.7463 |
| 0.6914 | 1.55 | 120 | 0.7422 |
| 0.6826 | 1.68 | 130 | 0.7398 |
| 0.6926 | 1.81 | 140 | 0.7380 |
| 0.6971 | 1.94 | 150 | 0.7349 |
| 0.6674 | 2.07 | 160 | 0.7370 |
| 0.6406 | 2.2 | 170 | 0.7360 |
| 0.6539 | 2.33 | 180 | 0.7351 |
| 0.6457 | 2.46 | 190 | 0.7352 |
| 0.6487 | 2.59 | 200 | 0.7343 |
| 0.6498 | 2.72 | 210 | 0.7339 |
| 0.6483 | 2.85 | 220 | 0.7340 |
| 0.6499 | 2.98 | 230 | 0.7339 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
cl-nagoya/shioriha-large-pt | cl-nagoya | 2024-02-25T12:12:00Z | 7 | 5 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-13T10:04:19Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# cl-nagoya/shioriha-large-pt
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('cl-nagoya/shioriha-large-pt')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('cl-nagoya/shioriha-large-pt')
model = AutoModel.from_pretrained('cl-nagoya/shioriha-large-pt')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=cl-nagoya/shioriha-large-pt)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Bul11k/finetuned-tiny-bert_second_attempt | Bul11k | 2024-02-25T12:10:24Z | 120 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:cointegrated/rubert-tiny",
"base_model:finetune:cointegrated/rubert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-25T11:49:45Z | ---
license: mit
base_model: cointegrated/rubert-tiny
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-tiny-bert_second_attempt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-tiny-bert_second_attempt
This model is a fine-tuned version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2433
- Accuracy: 0.5666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
rorito/jug19light | rorito | 2024-02-25T12:09:32Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-25T12:09:32Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
a young woman, swimming cap, brown hair, green eyes, upper body, from above,
facing the viewer, floating on the water, wet, pensive, one-piece swimming
suit, realistic, skin texture, best quality
parameters:
negative_prompt: bad quality, worst quality, low quality, semi realistic, 2d, cartoon,
output:
url: images/00057-2415965909.jpeg.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# jug
<Gallery />
## Download model
[Download](/rorito/jug19light/tree/main) them in the Files & versions tab.
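Alternatively, a hedged sketch of loading this LoRA on top of its SDXL base with `diffusers`; the prompt reuses the widget example above, and automatic detection of the weight file is an assumption.
```python
# Hedged sketch; assumes diffusers can auto-detect the single LoRA weight
# file in this repo, which is not confirmed by the card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rorito/jug19light")
image = pipe("a young woman, swimming cap, brown hair, green eyes").images[0]
```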
|
Jayantez/ppo-LunarLander-v2 | Jayantez | 2024-02-25T11:51:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-25T11:51:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.25 +/- 16.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal hedged loading sketch; the checkpoint filename inside the repo is an assumption based on the usual naming convention:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's Files & versions tab for the actual name.
checkpoint = load_from_hub(repo_id="Jayantez/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
lvcalucioli/llamantino7b_2_2_syntetic-question-answering_merged | lvcalucioli | 2024-02-25T11:50:09Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-25T11:42:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rorito/jugxl9 | rorito | 2024-02-25T11:39:49Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-25T11:39:49Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
a young woman, swimming cap, brown hair, green eyes, upper body, from above,
facing the viewer, floating on the water, wet, pensive, one-piece swimming
suit, realistic, skin texture, best quality
parameters:
negative_prompt: bad quality, worst quality, low quality, semi realistic, 2d, cartoon,
output:
url: images/00057-2415965909.jpeg.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# jugger
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/rorito/jugxl9/tree/main) them in the Files & versions tab.
|
ruige2002/finetuning-sentiment-model-500-hand-labelled-cashtag-tweets-distilbert-base-uncased | ruige2002 | 2024-02-25T11:26:05Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-25T11:20:48Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-500-hand-labelled-cashtag-tweets-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-500-hand-labelled-cashtag-tweets-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1248
- Accuracy: 0.9167
- F1: 0.9556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Warkawik/code_example | Warkawik | 2024-02-25T11:20:26Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2024-02-25T06:12:55Z | ---
license: bigcode-openrail-m
library_name: peft
tags:
- generated_from_trainer
base_model: bigcode/starcoderbase-1b
model-index:
- name: code_example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code_example
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9662 | 0.05 | 100 | 0.9184 |
| 0.9899 | 0.1 | 200 | 0.9461 |
| 0.6517 | 0.15 | 300 | 0.9698 |
| 0.8963 | 0.2 | 400 | 0.9823 |
| 0.9498 | 0.25 | 500 | 0.9727 |
| 0.5741 | 0.3 | 600 | 1.0098 |
| 0.7985 | 0.35 | 700 | 1.0212 |
| 0.8268 | 0.4 | 800 | 1.0123 |
| 0.5209 | 0.45 | 900 | 1.0178 |
| 0.7512 | 0.5 | 1000 | 1.0302 |
| 0.7718 | 0.55 | 1100 | 1.0342 |
| 0.4746 | 0.6 | 1200 | 1.0492 |
| 0.6964 | 0.65 | 1300 | 1.0394 |
| 0.6844 | 0.7 | 1400 | 1.0471 |
| 0.5396 | 0.75 | 1500 | 1.0495 |
| 0.6569 | 0.8 | 1600 | 1.0553 |
| 0.6005 | 0.85 | 1700 | 1.0609 |
| 0.6015 | 0.9 | 1800 | 1.0632 |
| 0.5552 | 0.95 | 1900 | 1.0620 |
| 0.5883 | 1.0 | 2000 | 1.0596 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
fzzhang/mistral_gsm8k_tuneSfull | fzzhang | 2024-02-25T11:16:47Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-25T03:34:51Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral_gsm8k_tuneSfull
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_gsm8k_tuneSfull
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0 |
Shariar433/bert-finetuned-ner-bangla | Shariar433 | 2024-02-25T11:10:46Z | 112 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-24T18:40:11Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner-bangla
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-bangla
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1194 | 0.84 | 500 | 0.1120 |
| 0.1027 | 1.68 | 1000 | 0.1048 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
OmarAlsaabi/gemma-7b-it-q4_k_m | OmarAlsaabi | 2024-02-25T10:53:14Z | 1 | 0 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-02-25T10:42:07Z | ---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma-7B-it GGUF Quantized
## Usage
This model can be used with the latest version of llama.cpp and LM Studio >0.2.16.
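A minimal sketch, assuming the `llama-cpp-python` bindings and inferring the GGUF filename from the repo name:
```python
# Hedged sketch; the GGUF filename is an assumption inferred from the repo
# name and is not confirmed by this card.
from llama_cpp import Llama

llm = Llama(model_path="gemma-7b-it-q4_k_m.gguf")
output = llm("Why is the sky blue?", max_tokens=64)
print(output["choices"][0]["text"])
``` |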
Dagonez/DialoGPT-small-Barney-Bot | Dagonez | 2024-02-25T10:47:19Z | 45 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"chatbot",
"conversational",
"text2text-generation",
"en",
"base_model:microsoft/DialoGPT-small",
"base_model:finetune:microsoft/DialoGPT-small",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-16T19:45:21Z | ---
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- chatbot
- conversational
inference: True
base_model: "microsoft/DialoGPT-small"
--- |
zayjean/llama-2-13b_draft-bo-lora-r8-a32-d0_3K-E20 | zayjean | 2024-02-25T10:35:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-25T10:35:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
woody72/albert-no-variable-items-length | woody72 | 2024-02-25T10:34:44Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-math-7b-base",
"base_model:finetune:deepseek-ai/deepseek-math-7b-base",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T08:29:47Z | ---
license: other
base_model: deepseek-ai/deepseek-math-7b-base
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: albert-no-variable-items-length
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-no-variable-items-length
This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7887 | 0.01 | 1 | 0.3267 |
| 0.8195 | 0.01 | 2 | 0.3220 |
| 0.7704 | 0.02 | 3 | 0.3179 |
| 0.7745 | 0.03 | 4 | 0.3148 |
| 0.7651 | 0.03 | 5 | 0.3115 |
| 0.7484 | 0.04 | 6 | 0.3085 |
| 0.7621 | 0.04 | 7 | 0.3065 |
| 0.7409 | 0.05 | 8 | 0.3033 |
| 0.7138 | 0.06 | 9 | 0.3003 |
| 0.7204 | 0.06 | 10 | 0.2985 |
| 0.6912 | 0.07 | 11 | 0.2952 |
| 0.7163 | 0.08 | 12 | 0.2922 |
| 0.6911 | 0.08 | 13 | 0.2901 |
| 0.6774 | 0.09 | 14 | 0.2878 |
| 0.6692 | 0.1 | 15 | 0.2851 |
| 0.6055 | 0.1 | 16 | 0.2825 |
| 0.4833 | 0.11 | 17 | 0.2801 |
| 0.4824 | 0.12 | 18 | 0.2765 |
| 0.4753 | 0.12 | 19 | 0.2752 |
| 0.4653 | 0.13 | 20 | 0.2722 |
| 0.466 | 0.13 | 21 | 0.2696 |
| 0.4599 | 0.14 | 22 | 0.2685 |
| 0.4484 | 0.15 | 23 | 0.2647 |
| 0.4508 | 0.15 | 24 | 0.2615 |
| 0.4436 | 0.16 | 25 | 0.2594 |
| 0.4526 | 0.17 | 26 | 0.2565 |
| 0.4332 | 0.17 | 27 | 0.2551 |
| 0.4246 | 0.18 | 28 | 0.2515 |
| 0.4241 | 0.19 | 29 | 0.2480 |
| 0.4173 | 0.19 | 30 | 0.2453 |
| 0.4031 | 0.2 | 31 | 0.2435 |
| 0.4122 | 0.2 | 32 | 0.2400 |
| 0.408 | 0.21 | 33 | 0.2386 |
| 0.3971 | 0.22 | 34 | 0.2361 |
| 0.4002 | 0.22 | 35 | 0.2337 |
| 0.3881 | 0.23 | 36 | 0.2310 |
| 0.3965 | 0.24 | 37 | 0.2272 |
| 0.3731 | 0.24 | 38 | 0.2245 |
| 0.3743 | 0.25 | 39 | 0.2211 |
| 0.3625 | 0.26 | 40 | 0.2191 |
| 0.3619 | 0.26 | 41 | 0.2167 |
| 0.3557 | 0.27 | 42 | 0.2149 |
| 0.3539 | 0.28 | 43 | 0.2107 |
| 0.3367 | 0.28 | 44 | 0.2089 |
| 0.3427 | 0.29 | 45 | 0.2061 |
| 0.333 | 0.29 | 46 | 0.2026 |
| 0.308 | 0.3 | 47 | 0.1996 |
| 0.2572 | 0.31 | 48 | 0.1969 |
| 0.2568 | 0.31 | 49 | 0.1934 |
| 0.2452 | 0.32 | 50 | 0.1915 |
| 0.2406 | 0.33 | 51 | 0.1874 |
| 0.2395 | 0.33 | 52 | 0.1834 |
| 0.2337 | 0.34 | 53 | 0.1805 |
| 0.2223 | 0.35 | 54 | 0.1767 |
| 0.2295 | 0.35 | 55 | 0.1732 |
| 0.222 | 0.36 | 56 | 0.1692 |
| 0.2174 | 0.36 | 57 | 0.1653 |
| 0.2064 | 0.37 | 58 | 0.1637 |
| 0.2075 | 0.38 | 59 | 0.1591 |
| 0.2014 | 0.38 | 60 | 0.1559 |
| 0.1963 | 0.39 | 61 | 0.1519 |
| 0.1908 | 0.4 | 62 | 0.1485 |
| 0.1963 | 0.4 | 63 | 0.1445 |
| 0.1793 | 0.41 | 64 | 0.1408 |
| 0.179 | 0.42 | 65 | 0.1377 |
| 0.1645 | 0.42 | 66 | 0.1334 |
| 0.1651 | 0.43 | 67 | 0.1299 |
| 0.1632 | 0.44 | 68 | 0.1264 |
| 0.1549 | 0.44 | 69 | 0.1224 |
| 0.1479 | 0.45 | 70 | 0.1186 |
| 0.1491 | 0.45 | 71 | 0.1153 |
| 0.1397 | 0.46 | 72 | 0.1109 |
| 0.1384 | 0.47 | 73 | 0.1069 |
| 0.1308 | 0.47 | 74 | 0.1025 |
| 0.1144 | 0.48 | 75 | 0.0987 |
| 0.1191 | 0.49 | 76 | 0.0946 |
| 0.1132 | 0.49 | 77 | 0.0910 |
| 0.1057 | 0.5 | 78 | 0.0867 |
| 0.0911 | 0.51 | 79 | 0.0841 |
| 0.1005 | 0.51 | 80 | 0.0812 |
| 0.0885 | 0.52 | 81 | 0.0790 |
| 0.091 | 0.52 | 82 | 0.0766 |
| 0.0841 | 0.53 | 83 | 0.0742 |
| 0.0808 | 0.54 | 84 | 0.0728 |
| 0.0752 | 0.54 | 85 | 0.0708 |
| 0.0717 | 0.55 | 86 | 0.0690 |
| 0.0767 | 0.56 | 87 | 0.0676 |
| 0.069 | 0.56 | 88 | 0.0658 |
| 0.0721 | 0.57 | 89 | 0.0643 |
| 0.074 | 0.58 | 90 | 0.0635 |
| 0.0633 | 0.58 | 91 | 0.0621 |
| 0.0706 | 0.59 | 92 | 0.0609 |
| 0.0575 | 0.6 | 93 | 0.0587 |
| 0.0641 | 0.6 | 94 | 0.0584 |
| 0.0574 | 0.61 | 95 | 0.0572 |
| 0.0658 | 0.61 | 96 | 0.0567 |
| 0.0696 | 0.62 | 97 | 0.0553 |
| 0.0603 | 0.63 | 98 | 0.0544 |
| 0.0571 | 0.63 | 99 | 0.0537 |
| 0.057 | 0.64 | 100 | 0.0527 |
| 0.0569 | 0.65 | 101 | 0.0514 |
| 0.0521 | 0.65 | 102 | 0.0511 |
| 0.052 | 0.66 | 103 | 0.0501 |
| 0.0529 | 0.67 | 104 | 0.0490 |
| 0.0523 | 0.67 | 105 | 0.0490 |
| 0.0567 | 0.68 | 106 | 0.0480 |
| 0.0547 | 0.68 | 107 | 0.0475 |
| 0.0509 | 0.69 | 108 | 0.0468 |
| 0.0506 | 0.7 | 109 | 0.0462 |
| 0.0533 | 0.7 | 110 | 0.0458 |
| 0.0544 | 0.71 | 111 | 0.0449 |
| 0.0521 | 0.72 | 112 | 0.0442 |
| 0.0527 | 0.72 | 113 | 0.0439 |
| 0.0495 | 0.73 | 114 | 0.0436 |
| 0.0471 | 0.74 | 115 | 0.0430 |
| 0.0442 | 0.74 | 116 | 0.0424 |
| 0.0432 | 0.75 | 117 | 0.0421 |
| 0.0451 | 0.76 | 118 | 0.0420 |
| 0.0498 | 0.76 | 119 | 0.0409 |
| 0.0468 | 0.77 | 120 | 0.0409 |
| 0.0474 | 0.77 | 121 | 0.0403 |
| 0.0462 | 0.78 | 122 | 0.0399 |
| 0.037 | 0.79 | 123 | 0.0397 |
| 0.041 | 0.79 | 124 | 0.0393 |
| 0.045 | 0.8 | 125 | 0.0387 |
| 0.045 | 0.81 | 126 | 0.0393 |
| 0.0416 | 0.81 | 127 | 0.0381 |
| 0.0418 | 0.82 | 128 | 0.0383 |
| 0.0382 | 0.83 | 129 | 0.0377 |
| 0.0444 | 0.83 | 130 | 0.0376 |
| 0.0404 | 0.84 | 131 | 0.0373 |
| 0.0438 | 0.84 | 132 | 0.0369 |
| 0.0411 | 0.85 | 133 | 0.0365 |
| 0.041 | 0.86 | 134 | 0.0364 |
| 0.0425 | 0.86 | 135 | 0.0362 |
| 0.0418 | 0.87 | 136 | 0.0356 |
| 0.0407 | 0.88 | 137 | 0.0358 |
| 0.04 | 0.88 | 138 | 0.0362 |
| 0.0382 | 0.89 | 139 | 0.0358 |
| 0.0415 | 0.9 | 140 | 0.0351 |
| 0.0374 | 0.9 | 141 | 0.0353 |
| 0.0377 | 0.91 | 142 | 0.0350 |
| 0.0368 | 0.92 | 143 | 0.0348 |
| 0.0389 | 0.92 | 144 | 0.0348 |
| 0.035 | 0.93 | 145 | 0.0344 |
| 0.0377 | 0.93 | 146 | 0.0346 |
| 0.0394 | 0.94 | 147 | 0.0344 |
| 0.0349 | 0.95 | 148 | 0.0344 |
| 0.0379 | 0.95 | 149 | 0.0341 |
| 0.0336 | 0.96 | 150 | 0.0340 |
| 0.037 | 0.97 | 151 | 0.0338 |
| 0.0338 | 0.97 | 152 | 0.0337 |
| 0.0384 | 0.98 | 153 | 0.0337 |
| 0.0385 | 0.99 | 154 | 0.0333 |
| 0.0345 | 0.99 | 155 | 0.0333 |
| 0.0354 | 1.0 | 156 | 0.0330 |
| 0.0359 | 1.0 | 157 | 0.0324 |
| 0.0372 | 1.01 | 158 | 0.0328 |
| 0.0337 | 1.02 | 159 | 0.0321 |
| 0.0344 | 1.02 | 160 | 0.0322 |
| 0.0351 | 1.03 | 161 | 0.0319 |
| 0.0324 | 1.04 | 162 | 0.0324 |
| 0.034 | 1.04 | 163 | 0.0320 |
| 0.0287 | 1.05 | 164 | 0.0321 |
| 0.03 | 1.06 | 165 | 0.0320 |
| 0.0314 | 1.06 | 166 | 0.0319 |
| 0.0275 | 1.07 | 167 | 0.0315 |
| 0.0213 | 1.08 | 168 | 0.0321 |
| 0.0277 | 1.08 | 169 | 0.0321 |
| 0.0275 | 1.09 | 170 | 0.0319 |
| 0.0182 | 1.09 | 171 | 0.0320 |
| 0.0236 | 1.1 | 172 | 0.0319 |
| 0.0319 | 1.11 | 173 | 0.0312 |
| 0.0358 | 1.11 | 174 | 0.0319 |
| 0.0328 | 1.12 | 175 | 0.0319 |
| 0.0337 | 1.13 | 176 | 0.0318 |
| 0.0382 | 1.13 | 177 | 0.0319 |
| 0.0315 | 1.14 | 178 | 0.0312 |
| 0.0308 | 1.15 | 179 | 0.0320 |
| 0.0314 | 1.15 | 180 | 0.0325 |
| 0.0312 | 1.16 | 181 | 0.0324 |
| 0.0326 | 1.16 | 182 | 0.0327 |
| 0.0278 | 1.17 | 183 | 0.0332 |
| 0.0264 | 1.18 | 184 | 0.0329 |
| 0.0307 | 1.18 | 185 | 0.0334 |
| 0.0255 | 1.19 | 186 | 0.0336 |
| 0.0284 | 1.2 | 187 | 0.0338 |
| 0.029 | 1.2 | 188 | 0.0342 |
| 0.0299 | 1.21 | 189 | 0.0343 |
| 0.0317 | 1.22 | 190 | 0.0342 |
| 0.0319 | 1.22 | 191 | 0.0345 |
| 0.0281 | 1.23 | 192 | 0.0344 |
| 0.0293 | 1.24 | 193 | 0.0341 |
| 0.0256 | 1.24 | 194 | 0.0344 |
| 0.0253 | 1.25 | 195 | 0.0345 |
| 0.0255 | 1.25 | 196 | 0.0344 |
| 0.0238 | 1.26 | 197 | 0.0343 |
| 0.0247 | 1.27 | 198 | 0.0338 |
| 0.0239 | 1.27 | 199 | 0.0344 |
| 0.0248 | 1.28 | 200 | 0.0345 |
| 0.0253 | 1.29 | 201 | 0.0343 |
| 0.0252 | 1.29 | 202 | 0.0340 |
| 0.0275 | 1.3 | 203 | 0.0340 |
| 0.0376 | 1.31 | 204 | 0.0336 |
| 0.0374 | 1.31 | 205 | 0.0333 |
| 0.0397 | 1.32 | 206 | 0.0331 |
| 0.0361 | 1.32 | 207 | 0.0322 |
| 0.0402 | 1.33 | 208 | 0.0319 |
| 0.0307 | 1.34 | 209 | 0.0317 |
| 0.0305 | 1.34 | 210 | 0.0309 |
| 0.0285 | 1.35 | 211 | 0.0307 |
| 0.0301 | 1.36 | 212 | 0.0307 |
| 0.0298 | 1.36 | 213 | 0.0306 |
| 0.0278 | 1.37 | 214 | 0.0305 |
| 0.0283 | 1.38 | 215 | 0.0303 |
| 0.0311 | 1.38 | 216 | 0.0304 |
| 0.0314 | 1.39 | 217 | 0.0306 |
| 0.0301 | 1.4 | 218 | 0.0304 |
| 0.0301 | 1.4 | 219 | 0.0303 |
| 0.0297 | 1.41 | 220 | 0.0299 |
| 0.0298 | 1.41 | 221 | 0.0300 |
| 0.0316 | 1.42 | 222 | 0.0299 |
| 0.0258 | 1.43 | 223 | 0.0296 |
| 0.0297 | 1.43 | 224 | 0.0297 |
| 0.0307 | 1.44 | 225 | 0.0289 |
| 0.0256 | 1.45 | 226 | 0.0285 |
| 0.0291 | 1.45 | 227 | 0.0285 |
| 0.0295 | 1.46 | 228 | 0.0286 |
| 0.0263 | 1.47 | 229 | 0.0283 |
| 0.0301 | 1.47 | 230 | 0.0284 |
| 0.0289 | 1.48 | 231 | 0.0285 |
| 0.0272 | 1.48 | 232 | 0.0286 |
| 0.0297 | 1.49 | 233 | 0.0286 |
| 0.0261 | 1.5 | 234 | 0.0286 |
| 0.0254 | 1.5 | 235 | 0.0286 |
| 0.0298 | 1.51 | 236 | 0.0284 |
| 0.0329 | 1.52 | 237 | 0.0278 |
| 0.0325 | 1.52 | 238 | 0.0281 |
| 0.0297 | 1.53 | 239 | 0.0280 |
| 0.0274 | 1.54 | 240 | 0.0281 |
| 0.0291 | 1.54 | 241 | 0.0277 |
| 0.0271 | 1.55 | 242 | 0.0279 |
| 0.0283 | 1.56 | 243 | 0.0278 |
| 0.0258 | 1.56 | 244 | 0.0277 |
| 0.0271 | 1.57 | 245 | 0.0276 |
| 0.0279 | 1.57 | 246 | 0.0273 |
| 0.0282 | 1.58 | 247 | 0.0274 |
| 0.0286 | 1.59 | 248 | 0.0272 |
| 0.0248 | 1.59 | 249 | 0.0268 |
| 0.0268 | 1.6 | 250 | 0.0272 |
| 0.0239 | 1.61 | 251 | 0.0271 |
| 0.0321 | 1.61 | 252 | 0.0268 |
| 0.0305 | 1.62 | 253 | 0.0266 |
| 0.0307 | 1.63 | 254 | 0.0263 |
| 0.0245 | 1.63 | 255 | 0.0266 |
| 0.0261 | 1.64 | 256 | 0.0268 |
| 0.0264 | 1.64 | 257 | 0.0262 |
| 0.0268 | 1.65 | 258 | 0.0264 |
| 0.0253 | 1.66 | 259 | 0.0261 |
| 0.0267 | 1.66 | 260 | 0.0261 |
| 0.0276 | 1.67 | 261 | 0.0262 |
| 0.0269 | 1.68 | 262 | 0.0260 |
| 0.0265 | 1.68 | 263 | 0.0262 |
| 0.0267 | 1.69 | 264 | 0.0262 |
| 0.0256 | 1.7 | 265 | 0.0260 |
| 0.0285 | 1.7 | 266 | 0.0257 |
| 0.0305 | 1.71 | 267 | 0.0259 |
| 0.0302 | 1.72 | 268 | 0.0262 |
| 0.0294 | 1.72 | 269 | 0.0258 |
| 0.0295 | 1.73 | 270 | 0.0255 |
| 0.027 | 1.73 | 271 | 0.0255 |
| 0.0276 | 1.74 | 272 | 0.0256 |
| 0.0256 | 1.75 | 273 | 0.0257 |
| 0.0248 | 1.75 | 274 | 0.0257 |
| 0.0305 | 1.76 | 275 | 0.0253 |
| 0.029 | 1.77 | 276 | 0.0253 |
| 0.0296 | 1.77 | 277 | 0.0251 |
| 0.0307 | 1.78 | 278 | 0.0250 |
| 0.0247 | 1.79 | 279 | 0.0250 |
| 0.0235 | 1.79 | 280 | 0.0250 |
| 0.0267 | 1.8 | 281 | 0.0250 |
| 0.0278 | 1.8 | 282 | 0.0246 |
| 0.0277 | 1.81 | 283 | 0.0248 |
| 0.0266 | 1.82 | 284 | 0.0242 |
| 0.0232 | 1.82 | 285 | 0.0244 |
| 0.0288 | 1.83 | 286 | 0.0243 |
| 0.0276 | 1.84 | 287 | 0.0243 |
| 0.0295 | 1.84 | 288 | 0.0241 |
| 0.0253 | 1.85 | 289 | 0.0240 |
| 0.0267 | 1.86 | 290 | 0.0244 |
| 0.0286 | 1.86 | 291 | 0.0242 |
| 0.0272 | 1.87 | 292 | 0.0237 |
| 0.0267 | 1.88 | 293 | 0.0240 |
| 0.0251 | 1.88 | 294 | 0.0243 |
| 0.0257 | 1.89 | 295 | 0.0239 |
| 0.0283 | 1.89 | 296 | 0.0237 |
| 0.0246 | 1.9 | 297 | 0.0238 |
| 0.0241 | 1.91 | 298 | 0.0238 |
| 0.0259 | 1.91 | 299 | 0.0241 |
| 0.0246 | 1.92 | 300 | 0.0237 |
| 0.0253 | 1.93 | 301 | 0.0239 |
| 0.0257 | 1.93 | 302 | 0.0237 |
| 0.0232 | 1.94 | 303 | 0.0238 |
| 0.0241 | 1.95 | 304 | 0.0237 |
| 0.0254 | 1.95 | 305 | 0.0237 |
| 0.0227 | 1.96 | 306 | 0.0238 |
| 0.023 | 1.96 | 307 | 0.0238 |
| 0.0232 | 1.97 | 308 | 0.0238 |
| 0.0268 | 1.98 | 309 | 0.0234 |
| 0.027 | 1.98 | 310 | 0.0237 |
| 0.0243 | 1.99 | 311 | 0.0235 |
| 0.025 | 2.0 | 312 | 0.0235 |
| 0.0185 | 2.0 | 313 | 0.0235 |
| 0.0153 | 2.01 | 314 | 0.0234 |
| 0.0119 | 2.02 | 315 | 0.0236 |
| 0.0094 | 2.02 | 316 | 0.0235 |
| 0.0171 | 2.03 | 317 | 0.0237 |
| 0.0121 | 2.04 | 318 | 0.0235 |
| 0.0135 | 2.04 | 319 | 0.0231 |
| 0.0182 | 2.05 | 320 | 0.0235 |
| 0.0128 | 2.05 | 321 | 0.0232 |
| 0.014 | 2.06 | 322 | 0.0235 |
| 0.0142 | 2.07 | 323 | 0.0237 |
| 0.0084 | 2.07 | 324 | 0.0236 |
| 0.0137 | 2.08 | 325 | 0.0235 |
| 0.0144 | 2.09 | 326 | 0.0238 |
| 0.0114 | 2.09 | 327 | 0.0237 |
| 0.0104 | 2.1 | 328 | 0.0239 |
| 0.0205 | 2.11 | 329 | 0.0234 |
| 0.0234 | 2.11 | 330 | 0.0233 |
| 0.0227 | 2.12 | 331 | 0.0237 |
| 0.0217 | 2.12 | 332 | 0.0235 |
| 0.025 | 2.13 | 333 | 0.0237 |
| 0.0208 | 2.14 | 334 | 0.0245 |
| 0.0192 | 2.14 | 335 | 0.0245 |
| 0.0195 | 2.15 | 336 | 0.0249 |
| 0.0203 | 2.16 | 337 | 0.0253 |
| 0.0234 | 2.16 | 338 | 0.0252 |
| 0.0176 | 2.17 | 339 | 0.0259 |
| 0.018 | 2.18 | 340 | 0.0260 |
| 0.0188 | 2.18 | 341 | 0.0265 |
| 0.0198 | 2.19 | 342 | 0.0262 |
| 0.0172 | 2.2 | 343 | 0.0268 |
| 0.0184 | 2.2 | 344 | 0.0271 |
| 0.0192 | 2.21 | 345 | 0.0273 |
| 0.0203 | 2.21 | 346 | 0.0277 |
| 0.0221 | 2.22 | 347 | 0.0283 |
| 0.0189 | 2.23 | 348 | 0.0282 |
| 0.0196 | 2.23 | 349 | 0.0289 |
| 0.0205 | 2.24 | 350 | 0.0288 |
| 0.0168 | 2.25 | 351 | 0.0291 |
| 0.0176 | 2.25 | 352 | 0.0294 |
| 0.0165 | 2.26 | 353 | 0.0295 |
| 0.0148 | 2.27 | 354 | 0.0301 |
| 0.0178 | 2.27 | 355 | 0.0296 |
| 0.0163 | 2.28 | 356 | 0.0301 |
| 0.0203 | 2.28 | 357 | 0.0303 |
| 0.0163 | 2.29 | 358 | 0.0301 |
| 0.0183 | 2.3 | 359 | 0.0301 |
| 0.0266 | 2.3 | 360 | 0.0299 |
| 0.0295 | 2.31 | 361 | 0.0295 |
| 0.0293 | 2.32 | 362 | 0.0291 |
| 0.0248 | 2.32 | 363 | 0.0281 |
| 0.0285 | 2.33 | 364 | 0.0273 |
| 0.0252 | 2.34 | 365 | 0.0262 |
| 0.0222 | 2.34 | 366 | 0.0257 |
| 0.0208 | 2.35 | 367 | 0.0245 |
| 0.0194 | 2.36 | 368 | 0.0239 |
| 0.0217 | 2.36 | 369 | 0.0238 |
| 0.0183 | 2.37 | 370 | 0.0237 |
| 0.0218 | 2.37 | 371 | 0.0238 |
| 0.0224 | 2.38 | 372 | 0.0233 |
| 0.023 | 2.39 | 373 | 0.0235 |
| 0.0205 | 2.39 | 374 | 0.0235 |
| 0.0215 | 2.4 | 375 | 0.0237 |
| 0.0189 | 2.41 | 376 | 0.0238 |
| 0.0233 | 2.41 | 377 | 0.0236 |
| 0.0225 | 2.42 | 378 | 0.0238 |
| 0.0196 | 2.43 | 379 | 0.0233 |
| 0.0224 | 2.43 | 380 | 0.0232 |
| 0.0214 | 2.44 | 381 | 0.0234 |
| 0.0187 | 2.44 | 382 | 0.0233 |
| 0.0199 | 2.45 | 383 | 0.0231 |
| 0.0227 | 2.46 | 384 | 0.0231 |
| 0.0199 | 2.46 | 385 | 0.0231 |
| 0.0248 | 2.47 | 386 | 0.0231 |
| 0.0196 | 2.48 | 387 | 0.0231 |
| 0.0214 | 2.48 | 388 | 0.0231 |
| 0.022 | 2.49 | 389 | 0.0230 |
| 0.0201 | 2.5 | 390 | 0.0232 |
| 0.0205 | 2.5 | 391 | 0.0233 |
| 0.0221 | 2.51 | 392 | 0.0231 |
| 0.0255 | 2.52 | 393 | 0.0233 |
| 0.0235 | 2.52 | 394 | 0.0232 |
| 0.0235 | 2.53 | 395 | 0.0231 |
| 0.0237 | 2.53 | 396 | 0.0232 |
| 0.0216 | 2.54 | 397 | 0.0236 |
| 0.0229 | 2.55 | 398 | 0.0232 |
| 0.0191 | 2.55 | 399 | 0.0231 |
| 0.0211 | 2.56 | 400 | 0.0231 |
| 0.02 | 2.57 | 401 | 0.0232 |
| 0.0217 | 2.57 | 402 | 0.0228 |
| 0.0228 | 2.58 | 403 | 0.0228 |
| 0.0222 | 2.59 | 404 | 0.0230 |
| 0.0197 | 2.59 | 405 | 0.0226 |
| 0.0195 | 2.6 | 406 | 0.0225 |
| 0.0208 | 2.6 | 407 | 0.0227 |
| 0.0238 | 2.61 | 408 | 0.0226 |
| 0.0247 | 2.62 | 409 | 0.0222 |
| 0.0229 | 2.62 | 410 | 0.0223 |
| 0.0208 | 2.63 | 411 | 0.0222 |
| 0.0197 | 2.64 | 412 | 0.0220 |
| 0.0215 | 2.64 | 413 | 0.0222 |
| 0.0216 | 2.65 | 414 | 0.0221 |
| 0.0194 | 2.66 | 415 | 0.0221 |
| 0.0223 | 2.66 | 416 | 0.0220 |
| 0.022 | 2.67 | 417 | 0.0220 |
| 0.0204 | 2.68 | 418 | 0.0218 |
| 0.0211 | 2.68 | 419 | 0.0219 |
| 0.0205 | 2.69 | 420 | 0.0218 |
| 0.021 | 2.69 | 421 | 0.0213 |
| 0.0206 | 2.7 | 422 | 0.0216 |
| 0.0261 | 2.71 | 423 | 0.0215 |
| 0.0234 | 2.71 | 424 | 0.0216 |
| 0.0246 | 2.72 | 425 | 0.0215 |
| 0.0242 | 2.73 | 426 | 0.0217 |
| 0.0204 | 2.73 | 427 | 0.0217 |
| 0.0214 | 2.74 | 428 | 0.0215 |
| 0.0235 | 2.75 | 429 | 0.0216 |
| 0.0196 | 2.75 | 430 | 0.0213 |
| 0.0223 | 2.76 | 431 | 0.0213 |
| 0.0239 | 2.76 | 432 | 0.0211 |
| 0.0224 | 2.77 | 433 | 0.0210 |
| 0.025 | 2.78 | 434 | 0.0210 |
| 0.0228 | 2.78 | 435 | 0.0210 |
| 0.0182 | 2.79 | 436 | 0.0207 |
| 0.0214 | 2.8 | 437 | 0.0208 |
| 0.022 | 2.8 | 438 | 0.0212 |
| 0.0233 | 2.81 | 439 | 0.0205 |
| 0.0209 | 2.82 | 440 | 0.0208 |
| 0.0194 | 2.82 | 441 | 0.0206 |
| 0.0215 | 2.83 | 442 | 0.0205 |
| 0.0242 | 2.84 | 443 | 0.0204 |
| 0.0219 | 2.84 | 444 | 0.0205 |
| 0.0217 | 2.85 | 445 | 0.0205 |
| 0.0237 | 2.85 | 446 | 0.0205 |
| 0.0214 | 2.86 | 447 | 0.0205 |
| 0.0224 | 2.87 | 448 | 0.0201 |
| 0.0199 | 2.87 | 449 | 0.0206 |
| 0.0219 | 2.88 | 450 | 0.0202 |
| 0.022 | 2.89 | 451 | 0.0202 |
| 0.0224 | 2.89 | 452 | 0.0204 |
| 0.0214 | 2.9 | 453 | 0.0200 |
| 0.0185 | 2.91 | 454 | 0.0201 |
| 0.0219 | 2.91 | 455 | 0.0202 |
| 0.0204 | 2.92 | 456 | 0.0203 |
| 0.0198 | 2.92 | 457 | 0.0201 |
| 0.0214 | 2.93 | 458 | 0.0202 |
| 0.0184 | 2.94 | 459 | 0.0203 |
| 0.0191 | 2.94 | 460 | 0.0200 |
| 0.0221 | 2.95 | 461 | 0.0205 |
| 0.0192 | 2.96 | 462 | 0.0204 |
| 0.0186 | 2.96 | 463 | 0.0204 |
| 0.0176 | 2.97 | 464 | 0.0201 |
| 0.0209 | 2.98 | 465 | 0.0203 |
| 0.0233 | 2.98 | 466 | 0.0200 |
| 0.0189 | 2.99 | 467 | 0.0204 |
| 0.0214 | 3.0 | 468 | 0.0201 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.17.1
- Tokenizers 0.15.2
|
WoopsFactory/pretrained | WoopsFactory | 2024-02-25T10:21:34Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T22:05:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RishavB777/Gemma-2B-finetuned-4bit_q | RishavB777 | 2024-02-25T10:07:01Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"dataset:Abirate/english_quotes",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-25T09:15:34Z | ---
license: mit
datasets:
- Abirate/english_quotes
--- |
mlwong/NPC-Bert | mlwong | 2024-02-25T10:04:53Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Nasopharyngeal carcinoma",
"Cancer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-09-10T07:53:46Z | ---
thumbnail: "url to a thumbnail used in social sharing"
tags:
- Nasopharyngeal carcinoma
- Cancer
license: "mit"
widget:
- text: "Nasopharyngeal carcinoma confined in the [MASK]."
example_title: "Example 1"
- text: "Nodal metastases in the left side of the [MASK]."
example_title: "Example 2"
- text: "Small bilateral cervical [MASK] with unusual distribution."
example_title: "Example 3"
---
# **IMPORTANT**
**>>> This is an outdated model, please see my [space](https://huggingface.co/spaces/mlwong/npc-bert-demo) for a more recent version. <<<**
---
# Background
This model was built on Microsoft's BERT trained on the PubMed uncased database (`microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext`). Approximately 500 radiology reports for staging nasopharyngeal carcinoma (NPC), written at our center by board-certified radiologists, were retrospectively retrieved with ethics approval. To focus on NPC, incidental findings and unrelated observations were removed prior to training. In addition, abbreviations for structures were replaced by the original words to help the model learn the suffixes and prefixes that might indicate anatomical locations (e.g. L neck -> left neck, IJC -> internal jugular chain).
A tokenizer was trained based on the original PubMed version, and the radiology reports were used to fine-tune PubMedBERT. This fine-tuned model has the weakness of being unable to identify phrases or multi-word nouns: e.g. "nodal metastases" is treated as two separate words, such that the BERT module tends to fill in "nodes" when both words are masked.
This model serves as a pilot analysis of whether it is possible to adopt transformer-based deep learning for a radiology report corpus of NPC.
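For illustration, a minimal fill-mask sketch using the standard `transformers` pipeline API; the prompts are the widget examples above, and this is an assumption about usage rather than the authors' own evaluation code:
```python
from transformers import pipeline

# Minimal fill-mask sketch; assumes this repo id resolves as shown on the card.
fill_mask = pipeline("fill-mask", model="mlwong/NPC-Bert")

# Prompts taken from the widget examples above.
for text in [
    "Nasopharyngeal carcinoma confined in the [MASK].",
    "Nodal metastases in the left side of the [MASK].",
]:
    for pred in fill_mask(text, top_k=3):
        print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```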
# Affiliations
Imaging and Interventional Radiology,
Chinese University of Hong Kong
|
ryusangwon/3230_Llama-2-7b-hf | ryusangwon | 2024-02-25T09:45:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:samsum",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-25T09:45:02Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: 3230_Llama-2-7b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3230_Llama-2-7b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
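For reference, the list above corresponds roughly to the following `BitsAndBytesConfig`; this is a sketch, not the actual training script (the 4-bit fields are left at their defaults, since only 8-bit loading was enabled):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch of the quantization config listed above (8-bit loading;
# the fp4/double-quant fields are defaults and unused here).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# Loads the (gated) base model with this config; the PEFT adapter from this
# repo would then be applied on top, e.g. via peft.PeftModel.from_pretrained.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
)
```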
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mwalol/loutish-dalmatian | mwalol | 2024-02-25T09:11:33Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-25T09:06:32Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.36.1
```
Also make sure you are providing your Hugging Face token to the pipeline if the model resides in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="mwalol/loutish-dalmatian",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=0,
max_new_tokens=1,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.0),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"mwalol/loutish-dalmatian",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"mwalol/loutish-dalmatian",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=0,
max_new_tokens=1,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.0),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "mwalol/loutish-dalmatian" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=0,
max_new_tokens=1,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.0),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
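As a minimal sketch (assuming `bitsandbytes` and `accelerate` are installed; parameter names follow the standard `transformers` loading API):
```python
from transformers import AutoModelForCausalLM

# 4-bit quantized loading with automatic sharding across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "mwalol/loutish-dalmatian",
    load_in_4bit=True,
    device_map="auto",
    trust_remote_code=True,
)
```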
## Model Architecture
```
MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=2)
(layers): ModuleList(
(0-31): 32 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=1024, bias=False)
(v_proj): Linear(in_features=4096, out_features=1024, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLU()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
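This summary is simply the PyTorch module tree; assuming the model has been loaded as in the usage examples above, it can be reproduced with:
```python
# Prints the module tree shown above.
print(model)
```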
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
kittysec/kittykatgamer_RVC_V2 | kittysec | 2024-02-25T08:51:08Z | 0 | 0 | null | [
"region:us"
] | null | 2024-02-25T08:47:26Z | An RVC_V2 model from KittyKatGamer, trained for 100 epochs |
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_by_question_nllf_v0_signal_it_129 | furrutiav | 2024-02-25T08:48:21Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-02-25T08:45:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AbetoRivas/Cantautor32 | AbetoRivas | 2024-02-25T08:39:25Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-25T08:39:21Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/IMG-20231225-WA0018.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# Alberto
<Gallery />
## Model description
This guy singing

## Download model
[Download](/AbetoRivas/Cantautor32/tree/main) them in the Files & versions tab.
|
Litzy619/V0224P1 | Litzy619 | 2024-02-25T08:38:06Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-02-25T05:00:18Z | ---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0224P1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0224P1
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
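For illustration, a sketch of how these hyperparameters might map onto `transformers.TrainingArguments`; the actual training script is not shown on the card, and `output_dir` is a hypothetical name:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="V0224P1",             # hypothetical output directory
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=32,   # 4 x 32 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=20,
    num_train_epochs=3,
    fp16=True,                        # "Native AMP" mixed precision
)
```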
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0572 | 0.13 | 10 | 0.9574 |
| 0.9041 | 0.26 | 20 | 0.8622 |
| 0.8373 | 0.39 | 30 | 0.8201 |
| 0.7968 | 0.52 | 40 | 0.7984 |
| 0.784 | 0.65 | 50 | 0.7833 |
| 0.7691 | 0.78 | 60 | 0.7743 |
| 0.7479 | 0.91 | 70 | 0.7653 |
| 0.735 | 1.04 | 80 | 0.7590 |
| 0.6958 | 1.17 | 90 | 0.7547 |
| 0.7106 | 1.3 | 100 | 0.7509 |
| 0.7013 | 1.43 | 110 | 0.7463 |
| 0.6914 | 1.55 | 120 | 0.7422 |
| 0.6826 | 1.68 | 130 | 0.7398 |
| 0.6926 | 1.81 | 140 | 0.7380 |
| 0.6971 | 1.94 | 150 | 0.7349 |
| 0.6674 | 2.07 | 160 | 0.7370 |
| 0.6406 | 2.2 | 170 | 0.7360 |
| 0.6539 | 2.33 | 180 | 0.7351 |
| 0.6457 | 2.46 | 190 | 0.7352 |
| 0.6487 | 2.59 | 200 | 0.7343 |
| 0.6498 | 2.72 | 210 | 0.7339 |
| 0.6483 | 2.85 | 220 | 0.7340 |
| 0.6499 | 2.98 | 230 | 0.7339 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
haripritam/gemma2b-it-openhermes20k | haripritam | 2024-02-25T08:36:19Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T08:31:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel_Roberta-MISTRAL_FULL-notrescaled_70000 | DragosGorduza | 2024-02-25T08:25:59Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-25T08:25:21Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
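Since semantic search is named as a use case, here is a minimal sketch; the corpus and query strings are illustrative, and `{MODEL_NAME}` is the placeholder used throughout this card:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

corpus = ["The fund reported a quarterly loss.", "The cat sat on the mat."]
query = "financial performance of the fund"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and each corpus sentence.
scores = util.cos_sim(query_emb, corpus_emb)[0]
for sent, score in zip(corpus, scores):
    print(f"{score:.3f}  {sent}")
```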
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 48151 with parameters:
```
{'batch_size': 48, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 90000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel_Roberta-MISTRAL_FULL-notrescaled_10000 | DragosGorduza | 2024-02-25T08:25:20Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-20T15:31:26Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 48151 with parameters:
```
{'batch_size': 48, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 90000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel_Roberta-MISTRAL_FULL-notrescaled_80000 | DragosGorduza | 2024-02-25T08:24:45Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-25T08:24:03Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 48151 with parameters:
```
{'batch_size': 48, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 90000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel_Roberta-MISTRAL_FULL-notrescaled_60000 | DragosGorduza | 2024-02-25T08:24:01Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-20T15:30:53Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
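For semantic-search style comparisons, the resulting embeddings can be scored with cosine similarity. A minimal sketch using the `sentence_transformers.util` helpers (the `{MODEL_NAME}` placeholder must be replaced with the actual repo id):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # replace with the actual repo id
query_embedding = model.encode("This is an example query", convert_to_tensor=True)
corpus_embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)
# Cosine-similarity scores between the query and each corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)
```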
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 48151 with parameters:
```
{'batch_size': 48, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 90000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel_Roberta-MISTRAL_FULL-notrescaled_40000 | DragosGorduza | 2024-02-25T08:22:56Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-20T15:32:42Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 48151 with parameters:
```
{'batch_size': 48, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 90000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel_Roberta-MISTRAL_FULL-notrescaled_50000 | DragosGorduza | 2024-02-25T08:22:18Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-20T15:32:02Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 48151 with parameters:
```
{'batch_size': 48, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 90000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel_Roberta-MISTRAL_FULL-notrescaled_30000 | DragosGorduza | 2024-02-25T08:21:43Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-20T15:33:20Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 48151 with parameters:
```
{'batch_size': 48, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 90000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel_Roberta-MISTRAL_FULL-notrescaled_20000 | DragosGorduza | 2024-02-25T08:21:07Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-20T15:30:09Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 48151 with parameters:
```
{'batch_size': 48, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 90000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
heldJan/llama-2-7b-froozen_CLIP_test_Train_only_projection | heldJan | 2024-02-25T08:19:17Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"VideoChatGPT",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-24T21:14:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arbitropy/BConvQA | arbitropy | 2024-02-25T08:12:59Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:csebuetnlp/banglat5",
"base_model:finetune:csebuetnlp/banglat5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-25T08:10:44Z | ---
base_model: csebuetnlp/banglat5
tags:
- generated_from_trainer
model-index:
- name: BConvQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BConvQA
This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1670
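A minimal inference sketch for this seq2seq checkpoint; the exact input format expected by the fine-tuned model is not documented, so the prompt below is illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("arbitropy/BConvQA")
model = AutoModelForSeq2SeqLM.from_pretrained("arbitropy/BConvQA")

# Illustrative question-answering style input; the real training format is undocumented
inputs = tokenizer("context: ... question: ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```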
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.9883 | 0.15 | 10000 | 1.7429 |
| 1.6691 | 0.3 | 20000 | 1.4494 |
| 1.5742 | 0.45 | 30000 | 1.3819 |
| 1.46 | 0.59 | 40000 | 1.3138 |
| 1.4805 | 0.74 | 50000 | 1.2829 |
| 1.4381 | 0.89 | 60000 | 1.2696 |
| 1.3504 | 1.04 | 70000 | 1.2356 |
| 1.3064 | 1.19 | 80000 | 1.2144 |
| 1.2817 | 1.34 | 90000 | 1.2073 |
| 1.2709 | 1.48 | 100000 | 1.2058 |
| 1.2348 | 1.63 | 110000 | 1.1814 |
| 1.1828 | 1.78 | 120000 | 1.1848 |
| 1.324 | 1.93 | 130000 | 1.1670 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
oliMLeng/distilbert-base-uncased-finetuned-emotion | oliMLeng | 2024-02-25T08:12:49Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-16T20:26:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.931
- name: F1
type: f1
value: 0.931331557456777
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1754
- Accuracy: 0.931
- F1: 0.9313
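A minimal inference sketch using the 🤗 `pipeline` API (assuming the checkpoint is available on the Hub under this repo id):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="oliMLeng/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy today!"))
```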
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.645 | 1.0 | 500 | 0.2154 | 0.927 | 0.9263 |
| 0.1846 | 2.0 | 1000 | 0.1754 | 0.931 | 0.9313 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.15.2
|
Ai-Marshal/Mixtral-8x7B-Instruct-v0.1-Finetuned | Ai-Marshal | 2024-02-25T07:59:38Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-20T20:20:01Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nguyenthanhdo/unsloth-sft-test | nguyenthanhdo | 2024-02-25T07:57:54Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T07:51:40Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yinani24/my_distilbert_ft_model_2 | yinani24 | 2024-02-25T07:54:40Z | 5 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"multiple-choice",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2023-11-12T04:54:23Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: yinani24/my_distilbert_ft_model_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# yinani24/my_distilbert_ft_model_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7662
- Validation Loss: 0.7765
- Train Accuracy: 0.6875
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 25, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0922 | 1.0961 | 0.5 | 0 |
| 1.0921 | 1.0474 | 0.6875 | 1 |
| 0.9923 | 0.8858 | 0.75 | 2 |
| 0.7662 | 0.7765 | 0.6875 | 3 |
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
roleplay4fun/unsloth-sft-test | roleplay4fun | 2024-02-25T07:51:39Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:NeverSleep/Noromaid-7b-v0.2",
"base_model:finetune:NeverSleep/Noromaid-7b-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-25T07:51:39Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: NeverSleep/Noromaid-7b-v0.2
---
# Uploaded model
- **Developed by:** roleplay4fun
- **License:** apache-2.0
- **Finetuned from model :** NeverSleep/Noromaid-7b-v0.2
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
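A minimal loading sketch with Unsloth; the sequence length and quantization settings below are illustrative, not the training configuration:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="roleplay4fun/unsloth-sft-test",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # illustrative
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
```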
|
Kar1suMAX/bert-base-japanese-v3-jsts | Kar1suMAX | 2024-02-25T07:43:52Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-25T07:43:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/Einstein-v4-7B-exl2 | bartowski | 2024-02-25T07:34:14Z | 5 | 3 | null | [
"axolotl",
"generated_from_trainer",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-02-25T07:20:59Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: Einstein-v4-7B
results: []
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Einstein-v4-7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.14">turboderp's ExLlamaV2 v0.0.14</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/Weyaxi/Einstein-v4-7B
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Einstein-v4-7B-exl2 Einstein-v4-7B-exl2-6_5
```
With `huggingface-hub` (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `Einstein-v4-7B-exl2`:
```shell
mkdir Einstein-v4-7B-exl2
huggingface-cli download bartowski/Einstein-v4-7B-exl2 --local-dir Einstein-v4-7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Einstein-v4-7B-exl2-6_5
huggingface-cli download bartowski/Einstein-v4-7B-exl2 --revision 6_5 --local-dir Einstein-v4-7B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (some shells mishandle `_` in folder names, so a dot is used instead):
```shell
mkdir Einstein-v4-7B-exl2-6.5
huggingface-cli download bartowski/Einstein-v4-7B-exl2 --revision 6_5 --local-dir Einstein-v4-7B-exl2-6.5 --local-dir-use-symlinks False
```
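Equivalently, a branch can be fetched from Python with `huggingface_hub` (a minimal sketch):
```python
from huggingface_hub import snapshot_download

# Download the 6.5 bpw branch (see the table above) into a local folder
snapshot_download(
    repo_id="bartowski/Einstein-v4-7B-exl2",
    revision="6_5",
    local_dir="Einstein-v4-7B-exl2-6_5",
)
```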
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
Heng666/taiwan-kapok-300m-base | Heng666 | 2024-02-25T07:28:17Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T07:20:39Z | ---
tags:
- generated_from_trainer
- mistral
model-index:
- name: taiwan-kapok-300M
results: []
license: apache-2.0
language:
- zh
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# taiwan-kapok-300M
It achieves the following results on the evaluation set:
- Loss: 2.7894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=0.0001
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
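These settings map onto 🤗 `TrainingArguments` roughly as follows (a sketch for reference only; output paths and other defaults are omitted):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="taiwan-kapok-300M",  # illustrative
    learning_rate=6e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=64,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=5,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-4,
    fp16=True,  # mixed_precision_training: Native AMP
    seed=42,
)
```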
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6962 | 3.35 | 5000 | 2.7894 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1 |
nickmiller795/rl_course_vizdoom_health_gathering_supreme | nickmiller795 | 2024-02-25T07:26:19Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-25T07:26:13Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.44 +/- 5.69
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r nickmiller795/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
chetana123/my-pet-dog | chetana123 | 2024-02-25T07:24:12Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-25T07:20:32Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by chetana123 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21711A0537
Sample pictures of this concept:

|
LarryAIDraw/yamada_anna_lora_ver2 | LarryAIDraw | 2024-02-25T07:22:28Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-25T07:20:35Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/250431/yamada-anna-the-dangers-in-my-heart |
LarryAIDraw/artkoikoi_Alisa_Southerncross_v3 | LarryAIDraw | 2024-02-25T07:15:10Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-25T06:51:44Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/49815/alisa-southerncross-keroro-gunsou-artkoikoi |
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_by_question_type_nllf_v0_signal_it_129 | furrutiav | 2024-02-25T07:10:19Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-02-25T07:06:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ryusangwon/253_Llama-2-7b-hf | ryusangwon | 2024-02-25T06:41:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:samsum",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-25T06:41:44Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: 253_Llama-2-7b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 253_Llama-2-7b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hotchpotch/Swallow-13b-stf-qa-context-jaqket-jsquad-gptq | hotchpotch | 2024-02-25T06:37:12Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"ja",
"dataset:hotchpotch/jaqket_v1_qa_wikija_context",
"dataset:shunk031/JGLUE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-02-25T06:03:08Z | ---
license: apache-2.0
datasets:
- hotchpotch/jaqket_v1_qa_wikija_context
- shunk031/JGLUE
language:
- ja
---
## 🤔 What is this?
- 🧠 Base LLM model
  - 🔗 https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf
- 📚 Datasets used
  - 📖 JAQKET (dev / unused split) + JSQuAD (train; a sampled subset)
- 🛠️ Training method
  - 📈 SFT (supervised fine-tuning)
- 🧮 Quantization method
  - 🤖 AutoGPTQ
- 💡 What this model can do
  - 💬 Generates answers to questions given a supporting context
## 🚀 Example usage
```python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM # type: ignore
MODEL_NAME = "hotchpotch/Swallow-13b-stf-qa-context-jaqket-jsquad-gptq"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = AutoGPTQForCausalLM.from_quantized(
MODEL_NAME,
device=device,
use_safetensors=True,
inject_fused_mlp=True,
inject_fused_attention=True,
use_flash_attention_2=True,
)
def generate(prompt: str) -> str:
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
input_ids=token_ids.to(model.device), # type: ignore
max_new_tokens=12,
do_sample=True,
top_p=0.95,
temperature=0.25,
repetition_penalty=1.3,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
output = tokenizer.decode(output_ids[0])
output = output.replace(prompt, "")
output = output.split(tokenizer.eos_token)[0]
return output.strip()
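# The template below is the standard Japanese Alpaca-style prompt:
# "Below is an instruction that describes a task, paired with an input that
#  provides further context. Write a response that appropriately completes
#  the request." Section headers: 指示 = Instruction, 入力 = Input, 応答 = Response.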
prompt_template = """
以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。リクエストを適切に完了するための回答を記述してください。
### 指示:
{question}
### 入力:
{context}
### 応答:
""".strip()
question = "楽曲『約束はいらない』でデビューした、声優は誰?"
context = """
Suara 大阪府立豊中高等学校を経て大阪外国語大学でインドネシア語を専攻中にバンド・ユニットを組んで音楽活動を始めた。普段はお笑い番組が大好きなこともあってよく喋るほうだが、東京の仕事で標準語の喋りをする時は、
早見沙織 声優デビュー時より、数多くの主題歌やキャラクターソングを担当し、バラードからポップス、ヒットソングのカバー曲や英語の楽曲など、様々な曲を歌いこなす。2009年には吉田仁美とのユニット「blue dro
約束はいらない 「約束はいらない」(やくそくはいらない)は、坂本真綾のデビューシングル。
約束はいらない 坂本真綾の歌手デビュー作品。当時坂本はまだ無名の声優であったが、同曲がテーマソングとなったアニメ『天空のエスカフローネ』とともに知名度を上げることとなる。後に「指輪」が同アニメの劇場版映画の主題歌とな
坂本真綾 本格的な歌手活動は、1996年にテレビアニメ『天空のエスカフローネ』へ出演したことがきっかけで始めており、同作のオープニングテーマソングである「約束はいらない」(岩里祐穂作詞、菅野よう子作曲)をシング
""".strip()
prompt = prompt_template.format(question=question, context=context)
print(generate(prompt))
#=> 坂本真綾(さかもとまあや / MAAY
``` |
cquentin48/open_domain_vector_dim_qa | cquentin48 | 2024-02-25T06:29:11Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-25T06:28:44Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# cquentin48/open_domain_vector_dim_qa
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('cquentin48/open_domain_vector_dim_qa')
embeddings = model.encode(sentences)
print(embeddings)
```
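Since the model targets clustering and semantic search, here is a minimal semantic-search sketch built on the usage above (the query and passages are illustrative only):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('cquentin48/open_domain_vector_dim_qa')

# Encode one query and a small set of candidate passages.
query_emb = model.encode("Who composed the theme song?", convert_to_tensor=True)
passages = ["The theme was composed by Yoko Kanno.", "The stadium opened in 1998."]
passage_emb = model.encode(passages, convert_to_tensor=True)

# Cosine similarity ranks passages by relevance to the query.
scores = util.cos_sim(query_emb, passage_emb)
print(scores)
```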
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=cquentin48/open_domain_vector_dim_qa)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 3646 with parameters:
```
{'batch_size': 24}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 72,
"weight_decay": 0.01
}
```
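Putting these pieces together, a hedged reconstruction of the training setup described above (the training pairs are placeholders; the actual dataset is not documented here):

```python
from sentence_transformers import SentenceTransformer, losses, InputExample
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer('cquentin48/open_domain_vector_dim_qa')

# NoDuplicatesDataLoader avoids duplicate texts within a batch, which matters
# for MultipleNegativesRankingLoss (it uses in-batch negatives).
train_examples = [InputExample(texts=["example question", "relevant passage"])]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=24)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=72,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```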
## Full Model Architecture
```
SentenceTransformer(
(0): SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
verach3n/translation-en-to-tr-1 | verach3n | 2024-02-25T06:22:41Z | 42 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-tatoeba-en-tr",
"base_model:finetune:Helsinki-NLP/opus-tatoeba-en-tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-22T11:39:11Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-tatoeba-en-tr
tags:
- generated_from_trainer
model-index:
- name: opus-tatoeba-en-tr-finetuned-en-to-tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-tatoeba-en-tr-finetuned-en-to-tr
This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-tr](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-tr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0240
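A minimal inference sketch with the 🤗 `pipeline` API (not verified against this checkpoint):

```python
from transformers import pipeline

# Marian-based checkpoints work with the generic translation pipeline.
translator = pipeline("translation", model="verach3n/translation-en-to-tr-1")
print(translator("How are you today?")[0]["translation_text"])
```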
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4348 | 1.0 | 12860 | 3.0240 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Litzy619/V0224O2 | Litzy619 | 2024-02-25T06:22:28Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-02-25T00:06:50Z | ---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0224O2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0224O2
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.92 | 0.13 | 10 | 2.0370 |
| 1.2885 | 0.26 | 20 | 1.0103 |
| 0.9359 | 0.39 | 30 | 0.8890 |
| 0.8522 | 0.52 | 40 | 0.8425 |
| 0.8213 | 0.65 | 50 | 0.8151 |
| 0.798 | 0.78 | 60 | 0.8006 |
| 0.7714 | 0.91 | 70 | 0.7896 |
| 0.7598 | 1.04 | 80 | 0.7814 |
| 0.7269 | 1.17 | 90 | 0.7729 |
| 0.7398 | 1.3 | 100 | 0.7685 |
| 0.7291 | 1.43 | 110 | 0.7637 |
| 0.7193 | 1.55 | 120 | 0.7597 |
| 0.7097 | 1.68 | 130 | 0.7571 |
| 0.7198 | 1.81 | 140 | 0.7536 |
| 0.7229 | 1.94 | 150 | 0.7491 |
| 0.6964 | 2.07 | 160 | 0.7503 |
| 0.6717 | 2.2 | 170 | 0.7484 |
| 0.6848 | 2.33 | 180 | 0.7472 |
| 0.6767 | 2.46 | 190 | 0.7471 |
| 0.6788 | 2.59 | 200 | 0.7468 |
| 0.6801 | 2.72 | 210 | 0.7463 |
| 0.6778 | 2.85 | 220 | 0.7463 |
| 0.6803 | 2.98 | 230 | 0.7463 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Siddartha10/gemma-2b-it_sarvam_ai_dataset | Siddartha10 | 2024-02-25T06:19:05Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mlx",
"conversational",
"dataset:sarvamai/samvaad-hi-v1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T04:15:38Z | ---
license: other
library_name: transformers
tags:
- mlx
widget:
- text: |
<start_of_turn>user
How does the brain work?<end_of_turn>
<start_of_turn>model
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
datasets:
- sarvamai/samvaad-hi-v1
---
# Siddartha10/gemma-2b-it_sarvam_ai_dataset
This model was converted to MLX format from [`google/gemma-2b-it`](https://huggingface.co/google/gemma-2b-it).
Refer to the [original model card](https://huggingface.co/google/gemma-2b-it) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Siddartha10/gemma-2b-it_sarvam_ai_dataset")
response = generate(model, tokenizer, prompt="hello", verbose=True)
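
# A hedged chat-style example matching the Gemma turn format shown in the
# widget above (the question text is illustrative):
chat_prompt = "<start_of_turn>user\nHow does the brain work?<end_of_turn>\n<start_of_turn>model\n"
chat_response = generate(model, tokenizer, prompt=chat_prompt, verbose=True)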
``` |
netcat420/MHENN7 | netcat420 | 2024-02-25T06:08:49Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T06:03:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ShaikArshiya/my-pet-dog | ShaikArshiya | 2024-02-25T06:01:47Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-25T05:57:47Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by ShaikArshiya following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_by_kmeans_Q_nllf_s_nllf_signal_it_137 | furrutiav | 2024-02-25T06:01:21Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-02-25T06:00:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ThuyNT03/CS505_COQE_viT5_Prompting10_ASPOL_vcheck2 | ThuyNT03 | 2024-02-25T05:53:31Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-25T04:46:38Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting10_ASPOL_vcheck2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting10_ASPOL_vcheck2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
Pubudu/prefix_tuning_12_par_bn_rf_4_dinamina_zwj_fix | Pubudu | 2024-02-25T05:53:12Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"mbart",
"adapterhub:summarization/dinamina_5100",
"dataset:dinamina_5100",
"region:us"
] | null | 2024-02-25T05:52:34Z | ---
tags:
- adapter-transformers
- mbart
- adapterhub:summarization/dinamina_5100
datasets:
- dinamina_5100
---
# Adapter `Pubudu/prefix_tuning_12_par_bn_rf_4_dinamina_zwj_fix` for facebook/mbart-large-50
An [adapter](https://adapterhub.ml) for the `facebook/mbart-large-50` model that was trained on the [summarization/dinamina_5100](https://adapterhub.ml/explore/summarization/dinamina_5100/) dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("facebook/mbart-large-50")
adapter_name = model.load_adapter("Pubudu/prefix_tuning_12_par_bn_rf_4_dinamina_zwj_fix", source="hf", set_active=True)
```
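A hedged inference sketch, assuming the loaded checkpoint exposes a sequence-to-sequence head usable with `generate` (not verified here; the input text is a placeholder):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50")

# Summarize an article with the adapter activated above; assumes a
# seq2seq head is available on the adapter model.
inputs = tokenizer("Article text to summarize ...", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```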
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
renubalyan/ppo-LunarLander-v2 | renubalyan | 2024-02-25T05:53:06Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-25T05:52:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.65 +/- 16.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual file):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed here.
checkpoint = load_from_hub("renubalyan/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
weifar/codellama-13b-SCdetecting-153 | weifar | 2024-02-25T05:44:46Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T05:36:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ThuyNT03/CS505_COQE_viT5_Prompting16_ASPOL | ThuyNT03 | 2024-02-25T05:44:02Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-25T04:54:54Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting16_ASPOL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting16_ASPOL
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
ambrosfitz/gemma-history-chat_v0.1 | ambrosfitz | 2024-02-25T05:39:46Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T05:18:58Z | ---
library_name: transformers
tags: []
---
|
mi-rei/CT_clinical-longformer_I-II-III_efficient_10e | mi-rei | 2024-02-25T05:34:08Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"longformer",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-25T05:33:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shashankss09/mistralai-probusiness-Finetune-1 | shashankss09 | 2024-02-25T05:32:26Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T05:27:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ThuyNT03/CS505_COQE_viT5_Prompting0_ASPOL_vcheck2 | ThuyNT03 | 2024-02-25T05:24:12Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-25T04:29:16Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting0_ASPOL_vcheck2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting0_ASPOL_vcheck2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
Kar1suMAX/bert-base-japanese-v3-jnli | Kar1suMAX | 2024-02-25T05:23:37Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-25T05:23:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wildzzz/36frames | wildzzz | 2024-02-25T05:20:24Z | 1 | 0 | diffusers | [
"diffusers",
"art",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-25T04:59:24Z | ---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- art
- stable-diffusion
library_name: diffusers
---
Recommended settings for generation:
- **Prompt template:** `Fine art photography, (amateur photo), (analog photo:1.2,film grain),({prompt}),vivid colours,realistic,photographic, (natural colors, correct white balance, color correction, dehaze,clarity), background rich in details, conceptual image,masterpiece, award winning photography`
- **Steps:** 28, **Sampler:** DPM++ 2M SDE Karras, **CFG scale:** 6, ADetailer enabled
- **Hires fix:** upscale 1.35, 14 steps, upscaler 4x_NMKD-Siax_200k
https://civitai.com/models/319576/36-frames
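The repo is tagged for diffusers; assuming it ships standard diffusers weights, here is a minimal sketch that approximates the settings above (DPM++ 2M SDE Karras maps to `DPMSolverMultistepScheduler` with the options shown; diffusers ignores A1111-style `(weight:1.2)` prompt syntax but accepts the text as-is):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("wildzzz/36frames", torch_dtype=torch.float16).to("cuda")
# Approximate "DPM++ 2M SDE Karras" from the recommended settings
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True, algorithm_type="sde-dpmsolver++"
)

# Subject ("a street cafe at dusk") is illustrative; slot your subject into the template above
prompt = "Fine art photography, (amateur photo), (analog photo:1.2,film grain), a street cafe at dusk, vivid colours, realistic, photographic"
image = pipe(prompt, num_inference_steps=28, guidance_scale=6).images[0]
image.save("36frames-sample.png")
```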
![00083-[number]-414915688.png](https://cdn-uploads.huggingface.co/production/uploads/638a6249fc10b2be22250918/sfbCsBmcEvZe5s8tcD5B5.png)
![00163-[number]-3904528518.png](https://cdn-uploads.huggingface.co/production/uploads/638a6249fc10b2be22250918/HWqZvanYOWkN1f23O6fFt.png)
![00070-[number]-414915675.png](https://cdn-uploads.huggingface.co/production/uploads/638a6249fc10b2be22250918/6yk_yWw6pLtGIAW9IWia7.png)
![00176-[number]-3904528531.png](https://cdn-uploads.huggingface.co/production/uploads/638a6249fc10b2be22250918/BYDHmjzVCuOiPuAi6bwho.png)
![00020-[number]-3173852746.png](https://cdn-uploads.huggingface.co/production/uploads/638a6249fc10b2be22250918/WjpxRhw22C1ActfaTBVgX.png)
![00527-[number]-2225445592.png](https://cdn-uploads.huggingface.co/production/uploads/638a6249fc10b2be22250918/SBV3nRXupGt_C4EVXV9F6.png)
![00549-[number]-298092460.png](https://cdn-uploads.huggingface.co/production/uploads/638a6249fc10b2be22250918/4cL18IYYhUVOw4ORcBCkA.png) |
ThuyNT03/CS505_COQE_viT5_Prompting0_ASPOL_vcheck1 | ThuyNT03 | 2024-02-25T05:18:54Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-25T04:28:52Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting0_ASPOL_vcheck1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting0_ASPOL_vcheck1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
liminerity/phigment5-slerp | liminerity | 2024-02-25T05:11:12Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"vince62s/phi-2-psy",
"liminerity/4",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T05:09:28Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- vince62s/phi-2-psy
- liminerity/4
---
# phigment5-slerp
phigment5-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [vince62s/phi-2-psy](https://huggingface.co/vince62s/phi-2-psy)
* [liminerity/4](https://huggingface.co/liminerity/4)
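## 💻 Usage
A minimal sketch for loading the merged model with 🤗 Transformers (the repo is tagged `custom_code`, hence `trust_remote_code=True`; the prompt and generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liminerity/phigment5-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, trust_remote_code=True)

inputs = tokenizer("What is spherical interpolation?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```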
## 🧩 Configuration
```yaml
slices:
- sources:
- model: vince62s/phi-2-psy
layer_range: [0, 32]
- model: liminerity/4
layer_range: [0, 32]
merge_method: slerp
base_model: vince62s/phi-2-psy
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
``` |
vincegmz/diffPrompt-stable-diffusion-v1-5 | vincegmz | 2024-02-25T05:06:26Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-25T04:51:28Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - vincegmz/diffPrompt-stable-diffusion-v1-5
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on an unspecified dataset. You can find some example images below.
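A minimal sketch for applying these LoRA weights on top of the base model with diffusers (the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Attach the LoRA adaptation weights from this repo
pipe.load_lora_weights("vincegmz/diffPrompt-stable-diffusion-v1-5")

image = pipe("a scenic mountain landscape at sunset").images[0]  # illustrative prompt
image.save("sample.png")
```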




|
liminerity/phigment4-slerp | liminerity | 2024-02-25T05:04:14Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mobiuslabsgmbh/aanaphi2-v0.1",
"liminerity/3",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T05:02:07Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mobiuslabsgmbh/aanaphi2-v0.1
- liminerity/3
---
# phigment4-slerp
phigment4-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [mobiuslabsgmbh/aanaphi2-v0.1](https://huggingface.co/mobiuslabsgmbh/aanaphi2-v0.1)
* [liminerity/3](https://huggingface.co/liminerity/3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mobiuslabsgmbh/aanaphi2-v0.1
layer_range: [0, 32]
- model: liminerity/3
layer_range: [0, 32]
merge_method: slerp
base_model: mobiuslabsgmbh/aanaphi2-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
``` |
sugatoray/mlx-neuralhermes-2.5-mistral-7b-q4bits | sugatoray | 2024-02-25T04:58:42Z | 10 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2024-02-25T04:40:14Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
- mlx
datasets:
- mlabonne/chatml_dpo_pairs
base_model: teknium/OpenHermes-2.5-Mistral-7B
---
# sugatoray/mlx-neuralhermes-2.5-mistral-7b-q4bits
This model was converted to MLX format from [`mlabonne/NeuralHermes-2.5-Mistral-7B`](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B).
Refer to the [original model card](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("sugatoray/mlx-neuralhermes-2.5-mistral-7b-q4bits")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
brycegoh/sdxl-cn-example | brycegoh | 2024-02-25T04:58:36Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-25T03:26:12Z | ---
license: openrail++
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
base_model: stabilityai/stable-diffusion-xl-base-1.0
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-brycegoh/sdxl-cn-example
These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
You can find some example images below.
prompt: red circle with blue background

prompt: cyan circle with brown floral background

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
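Until the snippet above is filled in, here is a minimal sketch of how an SDXL ControlNet checkpoint like this is typically loaded with diffusers (the conditioning image below is a placeholder, since the conditioning type is not documented):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("brycegoh/sdxl-cn-example", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

conditioning = load_image("conditioning.png")  # placeholder conditioning image
image = pipe("red circle with blue background", image=conditioning).images[0]
image.save("out.png")
```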
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
liminerity/phigment3-slerp | liminerity | 2024-02-25T04:55:47Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/merge",
"vince62s/phi-2-psy",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T04:54:08Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/merge
- vince62s/phi-2-psy
---
# phigment3-slerp
phigment3-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/merge](https://huggingface.co/liminerity/merge)
* [vince62s/phi-2-psy](https://huggingface.co/vince62s/phi-2-psy)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/merge
layer_range: [0, 32]
- model: vince62s/phi-2-psy
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
``` |
yinani24/my_distilbert_ft_model_3 | yinani24 | 2024-02-25T04:48:54Z | 60 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"multiple-choice",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-01-31T21:00:45Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: yinani24/my_distilbert_ft_model_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# yinani24/my_distilbert_ft_model_3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9304
- Validation Loss: 0.9185
- Train Accuracy: 0.625
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 25, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.1026 | 1.0961 | 0.625 | 0 |
| 1.0855 | 1.0854 | 0.625 | 1 |
| 1.0607 | 1.0109 | 0.625 | 2 |
| 0.9304 | 0.9185 | 0.625 | 3 |
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Lekhasri/my-pet-dog | Lekhasri | 2024-02-25T04:45:38Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-25T04:41:28Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Lekhasri following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
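A minimal sketch for sampling from this DreamBooth checkpoint with diffusers (the concept token is not documented, so the prompt below is a guess based on the model name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Lekhasri/my-pet-dog", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of my pet dog playing in a park").images[0]  # guessed concept prompt
image.save("my-pet-dog.png")
```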
Sample pictures of this concept:

|
pszemraj/beit-large-patch16-512-film-shot-classifier | pszemraj | 2024-02-25T04:43:52Z | 8 | 1 | transformers | [
"transformers",
"onnx",
"safetensors",
"beit",
"image-classification",
"vision",
"cinematography",
"film",
"dataset:szymonrucinski/types-of-film-shots",
"base_model:microsoft/beit-large-patch16-512",
"base_model:quantized:microsoft/beit-large-patch16-512",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-22T22:59:05Z | ---
license: apache-2.0
tags:
- image-classification
- vision
- cinematography
- film
datasets:
- szymonrucinski/types-of-film-shots
metrics:
- accuracy
base_model: microsoft/beit-large-patch16-512
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-512: types of film shots



## Model description
This model is a fine-tuned version of [microsoft/beit-large-patch16-512](https://huggingface.co/microsoft/beit-large-patch16-512) on the szymonrucinski/types-of-film-shots dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2335
- Accuracy: 0.6763
## usage
```py
from transformers import pipeline
from PIL import Image
import requests
pipe = pipeline(
"image-classification",
model="pszemraj/beit-large-patch16-512-film-shot-classifier",
)
url = "https://cdn-uploads.huggingface.co/production/uploads/60bccec062080d33f875cd0c/9YqYvv188ZccCMSzuv0KW.png"
image = Image.open(requests.get(url, stream=True).raw)
result = pipe(image)[0]
print(result)
```
Try it on some film stills of your own; the available labels are listed below.
### class labels
The dataset contains the following labels:
```json
"id2label": {
"0": "ambiguous",
"1": "closeUp",
"2": "detail",
"3": "extremeLongShot",
"4": "fullShot",
"5": "longShot",
"6": "mediumCloseUp",
"7": "mediumShot"
},
```
as plaintext:
```txt
ambiguous, close up, detail, extreme long shot, full shot, long shot, medium close up, medium shot
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 24414
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0435 | 1.0 | 393 | 1.4799 | 0.4892 |
| 1.1554 | 2.0 | 786 | 1.4938 | 0.4892 |
| 1.5041 | 3.0 | 1179 | 2.1702 | 0.3597 |
| 1.0457 | 4.0 | 1572 | 1.5413 | 0.5683 |
| 0.3315 | 5.0 | 1965 | 1.0769 | 0.6978 |
| 0.2178 | 6.0 | 2358 | 1.2335 | 0.6763 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
M9DX/rgh | M9DX | 2024-02-25T04:43:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-25T04:43:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
benchang1110/temp-tokenizer-a8k-d40d11 | benchang1110 | 2024-02-25T04:35:55Z | 0 | 0 | null | [
"zh",
"en",
"region:us"
] | null | 2024-02-24T12:12:19Z | ---
language:
- zh
- en
---
# temp-tokenizer-a8k-d40d11
This tokenizer is a part of the `temp` project.
* Base tokenizer: `TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T`
* Built with: `word_frequency_list`
* Vocab size: `40001`
* Tokens added (planned/actual): `8000` / `8001`
* Full config:
```json
{"build_with": "word_frequency_list", "tokens_to_add": 8000, "word_frequency_list_settings": {"word_frequency_list_name": "zetavg/tw-sinica-corpus-word-frequency", "include_words": ["。", ",", "、", "?", "!", ";", ":", "……", "~", "「", "」", "『", "』", "【", "】", "〖", "〗", "(", ")", "〔", "〕", "[", "]", "{", "}", "《", "》", "〈", "〉", "——", "──", "-", "−", "_", "・", ".", "·", "/", "\", "|", "<", ">"], "replace_rules": [{"match": {"regex": "�"}, "replace": null}, {"match": {"pos": ["Nb", "FW", null]}, "replace": null, "except": ["奧運", "中共", "國民黨", "民進黨", "新黨", "共產黨", "媽祖", "耶穌"]}, {"match": {"regex": ["^[A-Za-z0-9﹒• ]+$", "^[零一二兩三四五六七八九十廿卅百千萬億兆壹貳參肆伍陸柒捌玖拾佰仟0-9﹒•]{2,}$", "^([零一二兩三四五六七八九十廿卅百千萬億兆壹貳參肆伍陸柒捌玖拾佰仟0-9﹒•]+)$", "^[第數][零一二兩三四五六七八九十百千萬億兆0-9﹒•]+$", "^[零一二兩三四五六七八九十廿卅百千萬億兆0-9﹒•]+分之[零一二兩三四五六七八九十廿卅百千萬億兆0-9﹒•]+$", "^[零一二兩三四五六七八九十廿卅百千萬億兆0-9﹒•]+[多餘來幾成次年月日天時分點世代歲起段樓%]$", "^[零一二三四五六七八九十廿卅0-9]+(月份|年代?|世紀|學?年度|年級)$", "^(星期|週|周)[一二三四五六日]$"]}, "replace": null, "except": ["十分", "一起", "一點", "一時", "千萬", "兩三", "百分之百"]}, {"match": {"pos": "VHC", "regex": "^(.{2,})化$"}, "sub": "\\1"}, {"match": {"regex": "^(.+球)場$"}, "sub": "\\1"}, {"match": {"pos": "Nc", "regex": "^(.{2,})園區$"}, "sub": "\\1"}, {"match": {"pos": "Nc", "regex": "^(.{2,})[鄉鎮縣市區]$"}, "sub": "\\1"}, {"match": {"pos": "Nc", "regex": "^(.{2,})[界院部會署局館系所]$"}, "sub": "\\1", "except": ["委員會", "研究所", "中研院", "國科會", "資策會", "經建會", "工研院", "電信總局", "鎮公所", "事務所", "交易所", "農委會", "鄉公所", "地檢署", "警分局", "派出所", "托兒所", "消基會", "文建會", "兩廳院", "陸委會", "市議會"]}, {"match": {"pos": "Na", "regex": "^(.{2,})人$"}, "sub": "\\1", "except": ["年輕人", "負責人", "投資人", "候選人", "一家人", "當地人", "製作人"]}, {"match": {"pos": "Na", "regex": "^(.{2,3})學?家$"}, "sub": "\\1", "except": ["女人家", "婦人家", "新儒家", "窮人家", "縱橫家", "老人家", "老東家", "闊人家", "大戶人家", "婦道人家", "小戶人家", "水上人家", "諸子百家"]}, {"match": {"pos": "Na", "regex": "^副?總?([^副總]{2,})師$"}, "sub": "\\1", "except": ["中醫師", "囝仔師", "正機師", "準教師", "獸醫師", "班導師", "練馬師", "總舖師", "老像師", "新三十師", "至聖先師", "音樂大師"]}, {"match": {"pos": "Na", "regex": "^[原前]?(?:代|代理)?副?總?([^前代副總議警里首院部署局廳司處科組課股]{2,})[院部署局廳司處科組課股]?次?長$"}, "sub": "\\1", "except": ["董事長", "理事長", "秘書長", "執行長", "分局長", "縣市長", "一技之長", "省市長", "負成長", "高成長", "大家長", "小組長", "區組長", "低成長", "偵一組長", "停管隊長", "考選部長", "年增長", "正成長", "支店長", "公賣局長", "中宣部長", "小市長"]}, {"match": {"pos": "Na", "regex": "^副?總?正?([^副總正議委人隊]{2,})[委人隊]?員$"}, "sub": "\\1", "except": ["主跑員", "乘務員", "佐理員", "共黨員", "外務員", "從業員", "特派員", "義服員", "銜道員", "啦啦隊員", "指服團員"]}, {"match": {"pos": "Na", "regex": "^副(.{2,})$"}, "sub": "\\1", "except": ["副作用"]}, {"match": "大規模", "replace": "規模"}, {"match": "廢棄物", "replace": "廢棄"}, {"match": "共產主義", "replace": "共產"}, {"match": "資本主義", "replace": "資本"}, {"match": "進一步", "replace": ["一步", "進一步"]}, {"match": "差不多", "replace": ["不多", "差不多"]}, {"match": "忍不住", "replace": ["不住", "忍不住"]}, {"match": "不見得", "replace": ["見得", "不見得"]}, {"match": "有助於", "replace": ["助於", "有助於"]}, {"match": "舊金山", "replace": ["金山", "舊金山"]}, {"match": "大躍進", "replace": ["躍進", "大躍進"]}, {"match": "半導體", "replace": ["導體", "半導體"]}, {"match": "總幹事", "replace": ["幹事", "總幹事"]}, {"match": "兩廳院", "replace": ["廳院", "兩廳院"]}]}}
``` |
Litzy619/V0224O1 | Litzy619 | 2024-02-25T04:35:55Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-02-24T22:21:32Z | ---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0224O1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0224O1
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5107 | 0.13 | 10 | 1.3353 |
| 1.0704 | 0.26 | 20 | 0.9178 |
| 0.8741 | 0.39 | 30 | 0.8580 |
| 0.8274 | 0.52 | 40 | 0.8184 |
| 0.8016 | 0.65 | 50 | 0.7969 |
| 0.7805 | 0.78 | 60 | 0.7831 |
| 0.7534 | 0.91 | 70 | 0.7732 |
| 0.7351 | 1.04 | 80 | 0.7642 |
| 0.6839 | 1.17 | 90 | 0.7570 |
| 0.697 | 1.3 | 100 | 0.7544 |
| 0.687 | 1.43 | 110 | 0.7497 |
| 0.6776 | 1.55 | 120 | 0.7447 |
| 0.6708 | 1.68 | 130 | 0.7426 |
| 0.6797 | 1.81 | 140 | 0.7383 |
| 0.6821 | 1.94 | 150 | 0.7340 |
| 0.6439 | 2.07 | 160 | 0.7410 |
| 0.6084 | 2.2 | 170 | 0.7397 |
| 0.6215 | 2.33 | 180 | 0.7378 |
| 0.6138 | 2.46 | 190 | 0.7389 |
| 0.6166 | 2.59 | 200 | 0.7380 |
| 0.6166 | 2.72 | 210 | 0.7376 |
| 0.615 | 2.85 | 220 | 0.7377 |
| 0.6165 | 2.98 | 230 | 0.7376 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
moficodes/gemma-2b-chat-finetune | moficodes | 2024-02-25T04:34:07Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T04:31:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
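Until the author fills this in, here is a generic sketch for loading the checkpoint as a Hub-hosted causal chat model (inferred only from the repo tags `gemma`/`conversational`; the prompt and settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moficodes/gemma-2b-chat-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chat-style prompt; the exact template this finetune expects is undocumented.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```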
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tSum10101/dqn-LunarLander-v2-default | tSum10101 | 2024-02-25T04:30:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-25T04:30:07Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -45.72 +/- 19.48
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is an assumption based on common `huggingface_sb3` layouts):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="tSum10101/dqn-LunarLander-v2-default", filename="dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)
```
|
Edwardwangbo/film | Edwardwangbo | 2024-02-25T04:21:57Z | 0 | 0 | flair | [
"flair",
"art",
"text-to-video",
"license:other",
"region:us"
] | text-to-video | 2024-02-25T04:07:35Z | ---
license: other
license_name: wangsfilm
license_link: LICENSE
library_name: flair
pipeline_tag: text-to-video
tags:
- art
--- |
kuotient/Seagull-13b-translation | kuotient | 2024-02-25T04:15:30Z | 18 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"translate",
"translation",
"ko",
"dataset:squarelike/sharegpt_deepl_ko_translation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2024-02-24T06:48:43Z | ---
license: cc-by-nc-sa-4.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- ko
pipeline_tag: translation
tags:
- translate
---
## **Seagull-13b-translation 📇**

**Seagull-13b-translation** is yet another translator model, but one that carefully addresses the following issues seen in existing translation models.
- Exact preservation of `newline` and `space`
- Not training on data with the first letter removed
- Code
- Markdown format
- LaTeX format
- etc
These issues were checked thoroughly during training, but when using the model we still recommend inspecting the outputs closely in such cases (e.g., text that contains code).
> If you're interested in building large-scale language models to solve a wide variety of problems in a wide variety of domains, you should consider joining [Allganize](https://allganize.career.greetinghr.com/o/65146).
For a coffee chat or if you have any questions, please do not hesitate to contact me as well! - [email protected]
This model was created as a personal experiment, unrelated to the organization I work for.
## **License**
## From original model author:
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Full License available at: https://huggingface.co/beomi/llama-2-koen-13b/blob/main/LICENSE
# **Model Details**
#### **Developed by**
Jisoo Kim(kuotient)
#### **Base Model**
[beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
#### **Datasets**
- [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)
- [KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
- AIHUB
- 기술과학 분야 한-영 번역 병렬 말뭉치 데이터
- 일상생활 및 구어체 한-영 번역 병렬 말뭉치 데이터
## **Usage**
#### **Format**
It follows only **ChatML** format.
```python
<|im_start|>system
주어진 문장을 한국어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss newline here
```
```python
<|im_start|>system
주어진 문장을 영어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss newline here
```
#### Output example
Source text:
> A particle's wave function, $\psi(x)$, is given by $$\psi(x)=\begin{cases} 3x & \text{if } -1 \leq x \leq 0 \\ 3(1-x) & \text{if } 0 < x \leq 1 \\ 0 & \text{otherwise} \end{cases}$$ Compute the Fourier transform, $\tilde{\psi}(k)$, of the wave function $\psi(x)$ and show that it satisfies the Fourier inversion theorem, i.e., $\psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{\psi}(k) e^{ikx} \mathrm{d}k$.
Seagull-13b-translation:
> 입자의 파동 함수 $\psi(x)$는 다음과 같이 주어집니다. $$\psi(x)=\begin{cases} 3x & \text{if } -1 \leq x \leq 0 \\ 3(1-x) & \text{if } 0 < x \leq 1 \\ 0 & \text{otherwise} \end{cases}$$ 파동 함수 $\psi(x)$의 푸리에 변환 $\tilde{\psi}(k)$를 계산하고 푸리에 반전 정리, 즉 $\psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{\psi}(k) e^{ikx} \mathrm{d}k$를 만족합니다.
DeepL:
> 입자의 파동 함수 $\psi(x)$는 $$\psi(x)=\begin{cases}로 주어집니다. 3x & \text{if } -1 \leq x \leq 0 \\ 3(1-x) & \text{if } 0 < x \leq 1 \\ 0 & \text{기타} \end{cases}$$ 파동 함수 $\psi(x)$의 푸리에 변환인 $\tilde{\psi}(k)$를 계산하고 푸리에 반전 정리, 즉 $\psi(x) = \frac{1}{\sqrt{2\pi}}를 만족함을 증명합니다. \int_{-\infty}^{\infty} \물결표{\psi}(k) e^{ikx} \mathrm{d}k$.
...and many more impressive cases involving SQL queries, code, and markdown!
#### **How to**
**I highly recommend running inference with vLLM. I will write a guide for quick and easy inference if requested.**
Since the chat_template already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")
messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
]
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")  # append the assistant header (see format above)
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
``` |
Lewdiculous/Prima-LelantaclesV4-7b-16k-GGUF | Lewdiculous | 2024-02-25T04:15:13Z | 27 | 6 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"text-generation-inference",
"merge",
"mergekit",
"text-generation",
"region:us"
] | text-generation | 2024-02-20T16:15:49Z | ---
base_model:
- Test157t/Yarncules-7b-128k
- Test157t/Prima-LelantaclesV3-7b
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- merge
- mergekit
pipeline_tag: text-generation
inference: false
---
**GGUF quantizations for [Test157t/Prima-LelantaclesV4-7b-16k](https://huggingface.co/Test157t/Prima-LelantaclesV4-7b-16k).**
*If you want any specific quantization to be added, feel free to ask.*
All credits belong to the [creator](https://huggingface.co/Test157t/).
`Base⇢ GGUF(F16)⇢ GGUF(Quants)`
Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2217](https://github.com/ggerganov/llama.cpp/releases/tag/b2217).
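A minimal sketch for running one of these quants locally with `llama-cpp-python` (the filename is a placeholder; substitute whichever quant you downloaded, and note the model's 16k context):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder filename; use the quant file you downloaded from this repo.
llm = Llama(model_path="Prima-LelantaclesV4-7b-16k.Q4_K_M.gguf", n_ctx=16384)
out = llm("Write a one-line greeting.", max_tokens=32)
print(out["choices"][0]["text"])
```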
# Original model card:

The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Test157t/Yarncules-7b-128k
layer_range: [0, 32]
- model: Test157t/Prima-LelantaclesV3-7b
layer_range: [0, 32]
merge_method: slerp
base_model: Test157t/Prima-LelantaclesV3-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
gsstein/model-75-percent-human-opt-og | gsstein | 2024-02-25T04:12:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-25T04:12:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
apriandito/sentiment-cnbc | apriandito | 2024-02-25T04:04:15Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:apriandito/autotrain-data-sentiment-cnbc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-25T04:04:04Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- apriandito/autotrain-data-sentiment-cnbc
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.2183430939912796
f1: 0.9439252336448598
precision: 0.9351851851851852
recall: 0.9528301886792453
auc: 0.9810430758276967
accuracy: 0.9433962264150944
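A minimal sketch for trying the classifier with the 🤗 `pipeline` API (label names come from the training data, so treat the output labels as repo-specific):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="apriandito/sentiment-cnbc")
print(clf("I love AutoTrain"))  # e.g. [{'label': '...', 'score': 0.99}]
```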
|