pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0–18.3M) | metadata (stringlengths 2–1.07B) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25)
---|---|---|---|---|---|---|---|---|
fill-mask | transformers |
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (CO2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (CO2eq in kg) | [More Information Needed] |
## Note
30 April 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | BERTrand_bs16_lr6 |
| sequence_length | 400 |
| num_epoch | 12 |
| learning_rate | 5e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 12597 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 15.499363 | 15.126202 |
| 0.5 | 8.243008 | 8.091313 |
| 1.0 | 7.354550 | 7.973097 |
| 1.5 | 7.244512 | 7.687902 |
| 2.0 | 7.199471 | 7.686438 |
| 2.5 | 7.161911 | 7.657964 |
| 3.0 | 7.049606 | 7.765894 |
| 3.5 | 7.091233 | 7.786471 |
| 4.0 | 7.060327 | 7.814007 |
| 4.5 | 7.164975 | 7.602441 |
| 5.0 | 7.008690 | 7.586652 |
| 5.5 | 7.035732 | 7.525334 |
| 6.0 | 7.045719 | 7.715493 |
| 6.5 | 7.108394 | 7.486681 |
| 7.0 | 6.992529 | 7.641908 |
| 7.5 | 6.993111 | 7.575982 |
| 8.0 | 7.051304 | 7.670884 |
| 8.5 | 7.075128 | 7.527084 |
| 9.0 | 7.010871 | 7.556140 |
| 9.5 | 6.978617 | 7.623441 |
| 10.0 | 7.033166 | 7.562979 |
| 10.5 | 7.026733 | 7.549661 |
| 11.0 | 7.080350 | 7.510597 |
| 11.5 | 6.955319 | 7.589880 |
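## Usage (sketch)
A minimal fill-mask sketch for inspecting the checkpoint, assuming the uploaded ALBERT weights expose the masked-language-modeling head (the example sentence is arbitrary):
```python
from transformers import pipeline

# Minimal sketch: load the checkpoint in a fill-mask pipeline.
fill = pipeline("fill-mask", model="damgomz/BERTrand_bs16_lr6")
print(fill(f"Paris is the {fill.tokenizer.mask_token} of France."))
```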
| {"language": "en", "tags": ["fill-mask"]} | damgomz/BERTrand_bs16_lr6 | null | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:29:56+00:00 |
null | null | {} | squaadinc/1714573769281x259531716233789440 | null | [
"region:us"
] | null | 2024-05-01T14:31:07+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** felixml
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | felixml/Llama-3-8B-synthetic_text_to_sql-60-steps-q8_0-gguf | null | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:31:09+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3_on_scigen_v2
This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
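For illustration only, the hyperparameters listed above map roughly onto a `TrainingArguments` setup like the sketch below; this is a reconstruction, not the author's actual training script.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
args = TrainingArguments(
    output_dir="Llama3_on_scigen_v2",
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 2 per device x 4 steps = total train batch size 8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=30,
)
```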
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "model-index": [{"name": "Llama3_on_scigen_v2", "results": []}]} | moetezsa/Llama3_on_scigen_v2 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:llama2",
"region:us"
] | null | 2024-05-01T14:31:16+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 400
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
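## Usage (sketch)
A minimal inference sketch; `sample.wav` is a placeholder path, and audio decoding requires ffmpeg or a soundfile backend.
```python
from transformers import pipeline

# Sketch: transcribe a short audio clip with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="rahul9699/my_awesome_asr_mind_model")
print(asr("sample.wav")["text"])
```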
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["minds14"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "my_awesome_asr_mind_model", "results": []}]} | rahul9699/my_awesome_asr_mind_model | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:31:16+00:00 |
null | null | {} | Mais99/Summarization | null | [
"region:us"
] | null | 2024-05-01T14:31:17+00:00 |
|
null | null | {} | fiouf/aw | null | [
"region:us"
] | null | 2024-05-01T14:31:18+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** felixml
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
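A minimal loading sketch, assuming this repo hosts LoRA adapter weights on top of the 4-bit base (bitsandbytes is required for the quantized base, and the exact file layout may differ):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-bnb-4bit"
adapter_id = "felixml/Llama-3-8B-synthetic_text_to_sql-60-steps-lora"

# Load the 4-bit base model, then attach the fine-tuned LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```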
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | felixml/Llama-3-8B-synthetic_text_to_sql-60-steps-lora | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:31:28+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5799
- Precision: 0.8808
- Recall: 0.9106
- F1: 0.8955
- Accuracy: 0.8507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.3333 | 100 | 0.6686 | 0.7452 | 0.8251 | 0.7831 | 0.7535 |
| No log | 2.6667 | 200 | 0.4724 | 0.8064 | 0.8713 | 0.8376 | 0.8389 |
| No log | 4.0 | 300 | 0.4922 | 0.8612 | 0.8942 | 0.8774 | 0.8481 |
| No log | 5.3333 | 400 | 0.4632 | 0.8587 | 0.8997 | 0.8787 | 0.8521 |
| 0.544 | 6.6667 | 500 | 0.4850 | 0.8632 | 0.9031 | 0.8827 | 0.8474 |
| 0.544 | 8.0 | 600 | 0.5024 | 0.8744 | 0.8992 | 0.8866 | 0.8451 |
| 0.544 | 9.3333 | 700 | 0.5394 | 0.8768 | 0.9155 | 0.8957 | 0.8565 |
| 0.544 | 10.6667 | 800 | 0.5647 | 0.8800 | 0.9146 | 0.8970 | 0.8550 |
| 0.544 | 12.0 | 900 | 0.5798 | 0.8847 | 0.9106 | 0.8974 | 0.8545 |
| 0.1288 | 13.3333 | 1000 | 0.5799 | 0.8808 | 0.9106 | 0.8955 | 0.8507 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.19.1
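## Usage (sketch)
A rough inference sketch; `form.png` is a placeholder image, and the default LayoutLMv3 processor runs OCR, which requires Tesseract/pytesseract to be installed.
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Sketch: tag the tokens of a scanned form with the fine-tuned checkpoint.
# The processor is taken from the base model in case the fine-tuned repo ships no processor files.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")  # apply_ocr=True by default
model = AutoModelForTokenClassification.from_pretrained("cor-c/test")

image = Image.open("form.png").convert("RGB")  # placeholder image path
encoding = processor(image, return_tensors="pt")
logits = model(**encoding).logits
predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions][:20])
```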
| {"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["funsd-layoutlmv3"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/layoutlmv3-base", "model-index": [{"name": "test", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "funsd-layoutlmv3", "type": "funsd-layoutlmv3", "config": "funsd", "split": "test", "args": "funsd"}, "metrics": [{"type": "precision", "value": 0.8808265257087938, "name": "Precision"}, {"type": "recall", "value": 0.910581222056632, "name": "Recall"}, {"type": "f1", "value": 0.895456765999023, "name": "F1"}, {"type": "accuracy", "value": 0.8507072387970998, "name": "Accuracy"}]}]}]} | cor-c/test | null | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"base_model:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:31:32+00:00 |
null | transformers | {} | Rasi1610/Death_Se44_newmodel_m11 | null | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:31:56+00:00 |
|
text-generation | transformers |
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | {"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | ammar4567/FYP-finetune | null | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:32:25+00:00 |
null | null | {"license": "mit"} | OomDawie/Try | null | [
"license:mit",
"region:us"
] | null | 2024-05-01T14:33:10+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ppo_zephyr310
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 32
- total_train_batch_size: 224
- total_eval_batch_size: 56
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "ppo_zephyr310", "results": []}]} | cleanrl/ppo_zephyr310 | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:33:11+00:00 |
null | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pretrained-bert
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.6929
- Validation Loss: 8.7752
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.6929 | 8.7752 | 0 |
### Framework versions
- Transformers 4.41.0.dev0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "pretrained-bert", "results": []}]} | Diluzx/pretrained-bert | null | [
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:34:37+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Reihaneh/wav2vec2_fy_nl_common_voice_17 | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:35:20+00:00 |
null | null | {} | squaadinc/1714574025432x722943452019425300 | null | [
"region:us"
] | null | 2024-05-01T14:35:22+00:00 |
|
null | null | {} | fiouf/zkoalsk | null | [
"region:us"
] | null | 2024-05-01T14:35:54+00:00 |
|
null | null | {} | squaadinc/chuck22222221 | null | [
"region:us"
] | null | 2024-05-01T14:37:03+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Poojithpoosa/myemotion_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5775
- Validation Loss: 1.5589
- Train Accuracy: 0.3475
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.002, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.5987 | 1.5625 | 0.3475 | 0 |
| 1.5827 | 1.5605 | 0.3475 | 1 |
| 1.5775 | 1.5589 | 0.3475 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.1
- Datasets 2.19.0
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Poojithpoosa/myemotion_model", "results": []}]} | Poojithpoosa/myemotion_model | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:37:46+00:00 |
null | null | {} | squaadinc/t | null | [
"region:us"
] | null | 2024-05-01T14:37:52+00:00 |
|
null | null | {} | Weblet/Llama-2-7b-chat-hf-turbo1714573957555788_cognitivecomputations-Code-290k-ShareGPT-Vicuna | null | [
"region:us"
] | null | 2024-05-01T14:37:59+00:00 |
|
null | null | {} | manoj-dhakal/mistral_philosloppy | null | [
"region:us"
] | null | 2024-05-01T14:38:11+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Waktaverse-Llama-3-KO-8B-Instruct - bnb 4bits
- Model creator: https://huggingface.co/PathFinderKR/
- Original model: https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct/
Original model description:
---
language:
- ko
- en
license: llama3
library_name: transformers
datasets:
- MarkrAI/KoCommercial-Dataset
---
# Waktaverse-Llama-3-KO-8B-Instruct Model Card
## Model Details

Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by the Waktaverse AI team.
This large language model is a specialized version of Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
- **Developed by:** Waktaverse AI
- **Model type:** Large Language Model
- **Language(s) (NLP):** Korean, English
- **License:** [Llama3](https://llama.meta.com/llama3/license)
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## Model Sources
- **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
- **Paper :** [More Information Needed]
## Uses
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
### Out-of-Scope Use
This model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.
Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
## Bias, Risks, and Limitations
While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
## How to Get Started with the Model
You can run conversational inference using the Transformers Auto classes.
We highly recommend adding a Korean system prompt for better output.
Adjust the hyperparameters as needed.
### Example Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = (
    "cuda:0" if torch.cuda.is_available() else  # Nvidia GPU
    "mps" if torch.backends.mps.is_available() else  # Apple Silicon GPU
    "cpu"
)

model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
).to(device)  # from_pretrained has no `device` argument, so move the model explicitly

################################################################################
# Generation parameters
################################################################################
num_return_sequences = 1
max_new_tokens = 1024
temperature = 0.9
top_k = 40
top_p = 0.9
repetition_penalty = 1.1

def generate_response(system, user):
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": user}
    ]
    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=False
    )
    input_ids = tokenizer.encode(
        prompt,
        add_special_tokens=False,
        return_tensors="pt"
    ).to(device)
    outputs = model.generate(
        input_ids=input_ids,
        pad_token_id=tokenizer.eos_token_id,
        num_return_sequences=num_return_sequences,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=temperature,
        top_k=top_k,
        top_p=top_p,
        repetition_penalty=repetition_penalty
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=False)

system_prompt = "다음 지시사항에 대한 응답을 작성해주세요."
user_prompt = "피보나치 수열에 대해 설명해주세요."

response = generate_response(system_prompt, user_prompt)
print(response)
```
### Example Output
```python
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
다음 지시사항에 대한 응답을 작성해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>
피보나치 수열에 대해 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
피보나치 수열은 수학에서 가장 유명한 수열 중 하나로, 0과 1로 시작하는 숫자들의 모임입니다. 각 숫자는 이전 두 개의 숫자의 합으로 정의되며, 이렇게 계속 반복됩니다. 피보나치 수열은 무한히 커지는데, 첫 번째와 두 번째 항이 모두 0일 수도 있지만 일반적으로는 첫 번째 항이 1이고 두 번째 항이 1입니다.
예를 들어, 0 + 1 = 1, 1 + 1 = 2, 2 + 1 = 3, 3 + 2 = 5, 5 + 3 = 8, 8 + 5 = 13, 13 + 8 = 21, 21 + 13 = 34 등이 있습니다. 이 숫자들을 피보나치 수열이라고 합니다.
피보나치 수열은 다른 수열들과 함께 사용될 때 도움이 됩니다. 예를 들어, 금융 시장에서는 금리 수익률을 나타내기 위해 이 수열이 사용됩니다. 또한 컴퓨터 과학과 컴퓨터 과학에서도 종종 찾을 수 있습니다. 피보나치 수열은 매우 복잡하며 많은 숫자가 나오므로 일반적인 수열처럼 쉽게 구할 수 없습니다. 이 때문에 피보나치 수열은 대수적 함수와 관련이 있으며 수학자들은 이를 연구하고 계산하기 위해 다양한 알고리즘을 개발했습니다.
참고 자료: https://en.wikipedia.org/wiki/Fibonacci_sequence#Properties.<|eot_id|>
```
## Training Details
### Training Data
The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.
### Training Procedure
Training used LoRA for computational efficiency; 0.02 billion parameters (0.26% of the total) were trained.
#### Training Hyperparameters
```python
################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch_dtype
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=False
################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=8
lora_alpha=16
lora_dropout=0.05
bias="none"
################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=1
per_device_train_batch_size=1
per_device_eval_batch_size=2
gradient_accumulation_steps=4
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
weight_decay=0.1
################################################################################
# SFT parameters
################################################################################
max_seq_length=1024
packing=True
```
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Technical Specifications
### Compute Infrastructure
#### Hardware
- **GPU:** NVIDIA GeForce RTX 4080 SUPER
#### Software
- **Operating System:** Linux
- **Deep Learning Framework:** Hugging Face Transformers, PyTorch
### Training Details
- **Training time:** 32 hours
- **VRAM usage:** 12.8 GB
- **GPU power usage:** 300 W
## Citation
**Waktaverse-Llama-3**
```
TBD
```
**Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Model Card Authors
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T14:38:20+00:00 |
null | null | {} | wardkerckhof/my_awesome_swag_model | null | [
"region:us"
] | null | 2024-05-01T14:38:30+00:00 |
|
text-generation | transformers |
## Llama-3-KoEn-8B-Instruct-preview
> Update @ 2024.05.01: Pre-Release [Llama-3-KoEn-8B model](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
## Model Details
**Llama-3-KoEn-8B-Instruct-preview**
The Llama-3-KoEn-8B model is a continually pretrained language model based on Llama-3-8B.
Training was done on a TPUv4-256, with the warm support of Google's TRC program.
Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799),
I released an instruction model named [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview).
It is NOT finetuned on any Korean instruction set (hence `preview`), but it should be a great starting point for creating new Chat/Instruct models.
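**Chat Vector sketch (illustrative)**
For readers curious what applying the Chat Vector idea amounts to, it is essentially weight arithmetic: the delta between the official Instruct and base Llama-3-8B weights is added to the continued-pretrained KoEn weights. The snippet below is only a sketch of that idea under assumed model IDs, not the author's exact procedure (it also needs enough memory to hold three 8B models).
```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative chat-vector sketch, NOT the author's exact script.
# Assumed model IDs; all three share the Llama-3 tokenizer, so tensor shapes line up.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
koen = AutoModelForCausalLM.from_pretrained("beomi/Llama-3-KoEn-8B-preview", torch_dtype=torch.bfloat16)

# chat vector = Instruct - Base; adding it transfers instruction-following
# behaviour onto the continued-pretrained KoEn weights.
with torch.no_grad():
    for p_koen, p_base, p_inst in zip(koen.parameters(), base.parameters(), inst.parameters()):
        p_koen.add_(p_inst - p_base)

koen.save_pretrained("Llama-3-KoEn-8B-Instruct-sketch")
```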
**Sample usage**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "beomi/Llama-3-KoEn-8B-Instruct-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=1,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
**Sample Output**
피보나치 수열은 이탈리아 수학자 레오나르도 피보나치의 이름을 따서 유래된 수열입니다. 피보나치 수열은 특정 조건에 따라 얻어지는 등차수열로, 다음 수를 구하는 단순한 규칙에 의해 생성되는 수열입니다.
1, 1, 2, 3, 5, 8, 13, 21,... (등차 수열)
이러한 수열은 자연 세계에도 많이 발견됩니다. 예를 들어, 새싹이 자라는 방식, 달걀이 부화되는 횟수, 코끼리의 생장 등이 피보나치 수열을 따르기 때문입니다. 🌱🦆🐮
지금부터는 파이썬 코드를 작성해 볼까요? 😊
먼저, 파이썬 코드를 작성하기 위해 필요한 모듈은 없습니다. 저희는 피보나치 수열을 생성하는 알고리즘을 작성할 것입니다.
1. 파이썬 코드 생성
다음은 피보나치 수열을 생성하는 코드입니다. 이 코드는 `while` 루프를 사용하여 피보나치 수열을 생성합니다.
```python
def fib(n):
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(2, n+1):
        a, b = b, a + b
    return b

n = int(input("피보나치 수열을 얼마나 생성할지 입력하세요: "))
for i in range(n):
    print(fib(i+1))
```
2. 코드 설명
위 코드는 `fib(n)` 함수를 정의합니다. 이 함수는 `n`번째 피보나치 수를 반환합니다.
`fib(n)` 함수는 `if` 조건문을 사용하여 `n`이 0 또는 1일 경우 해당 값은 즉시 반환합니다.
`a, b = 0, 1`은 피보나치 수열의 초기 값을 설정합니다. `a`는 `0`을, `b`는 `1`을 할당합니다.
`for` 루프는 `range(2, n+1)`에서 생성되는 반복을 수행합니다. 이 반복은 `a, b = b, a + b`를 수행하여 피보나치 수열을 생성합니다.
`print(fib(i+1))`는 `fib(n)` 함수 호출 결과를 출력합니다.
3. 코드 실행
위 코드를 실행하기 위해 파이썬 인터프레터에서 다음과 같이 실행할 수 있습니다.
```python
$ python fib.py
피보나치 수열을 얼마나 생성할지 입력하세요: 10
1
1
2
3
5
8
13
21
34
55
```
위 코드를 실행하면, 사용자는 `피보나치 수열을 얼마나 생성할지 입력하세요:`를 입력할 수 있습니다. 이 입력은 `n`에 할당됩니다. `n`은 코드가 피보나치 수열을 생성하는 횟수를 결정하는 변수입니다. 사용자는 이 횟수를 입력하여 피보나치 수열을 생성할 수 있습니다.
이상 파이썬 코드를 작성하고 실행하는 방법을 설명했습니다. 😊
### More samples
> I used the same system prompt, but you can change it as you like.
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "한국의 제헌 헌법에 대해서 자세히 설명해주세요."},
]
---
대한민국 헌법은 1948년 7월 17일 제정된 이래 현재까지 대한민국의 기본법인 국가의 기본질서와 국가기관의 권한, 국민의 권리·의무, 통치조직, 통치방법 등을 규정하는 법률입니다. 헌법은 대한민국의 최고법령으로, 일반법상 모든 법률의 근거가 됩니다. 또한 헌법은 1987년 제정된 제6공화국 헌법부터는 '민주공화국'을 기본질서로 규정하고 있습니다.
** 주요 내용 **
1. 기본질서 : 민주공화국, 통일조국의 원칙, 인권의 보장
2. 국민의 권리 : 신체의 자유, 기본적 인권, 재산권, 참정권
3. 국가기관 : 대통령, 국무총리, 국회, 대법원, 헌법재판소, 검찰
4. 통치조직 : 행정부, 입법부, 사법부의 분리
5. 통치방법 : 민주주의(국민의 주권에 의하여 국민으로부터 위임받은 권력 행사), 법치주의(법률에 의하여 통치)
** 주요 조항 **
1. 제1조 : 대한민국은 민주공화국임을 천명
2. 제2조 : 민주공화국의 주권은 국민에게 있으며, 모든 권력은 국민으로부터 나온다
3. 제10조 : 모든 국민은 행복을 추구할 권리를 가진다
4. 제13조 : 모든 국민은 주거의 자유를 갖는다
5. 제25조 : 국가는 재해를 방지하고 그에 대처하는 시책을 시행하여야 한다
6. 제64조 : 대통령은 법률을 공포하고, 국무총리와 국무위원을 임명한다
7. 제111조 : 대법원은 최종적인 법률심판을 행하고, 헌법재판소는 헌법에 위반되는지 여부를 결정
** 헌법의 효력 **
헌법은 대한민국의 기본법률이므로 모든 국민은 헌법에 따라야 합니다. 또한 헌법은 변경하려면 헌법개정안을 제안하고, 이를 국민투표에 붙여야 하며, 이를 통과하면 새 헌법을 제정하게 됩니다. (제130조)
이상, 대한민국 헌법의 주요 내용과 조항, 효력을 설명하였습니다. 한국 헌법은 민주공화국의 기본질서를 중시하고, 기본권을 보호할 뿐만 아니라, 통치조직의 분리, 법치주의, 민주주의를 근간으로 합니다. 🇰🇷
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "이순신 장군에 대해 설명해줘."},
]
---
이순신 장군은 조선의 문신, 무신, 해군의 영웅으로 1592년부터 1598년까지 임진왜란과 정유재란 동안 해군으로서 활약했습니다. 그는 1591년 전라좌도 수군절도사가 되어 선제공격 전통을 이어받은 바, '신속성'과 '신용', '도탄전' 등의 병법을 도입하여 조선 수군을 개편했습니다. 이순신 장군의 가장 큰 업적으로는 두 가지를 꼽을 수 있습니다. 첫 번째는 '한산도 대첩'입니다. 1592년 7월 8일, 이순신 장군은 이끄는 조선 수군 13척이 왜군 133척에 맞서 13척의 배로는 불가능한 전쟁을 벌이게 됩니다. 이때 이순신 장군은 '진퇴유곡'의 전술을 사용하여 13척의 배를 133척의 배와 싸워 승리를 거두었습니다. 두 번째는 '노량해전'입니다. 1598년 11월 14일, 이순신 장군은 노량에서 12척의 배로 330척의 배를 이끄는 와키자카 야스하루와 싸우게 됩니다. 이때 이순신 장군의 배에 명중된 총알이 1발이었고, 12척의 배가 고전을 면치 못했지만, 이순신 장군은 끝까지 싸워 2명을 제외하고는 전원 전사하게 됩니다. 이순신 장군의 전투를 비롯하여 그의 업적은 조선 수군의 전투 승리를 이끌었으며, 그의 이름은 한국의 국보 제76호로 등재되어 있습니다. 이순신 장군의 업적과 정신은 한국 역사와 문화에 지대한 영향을 끼쳤습니다.
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "독도와 다케시마의 차이가 뭐야?"},
]
---
! `한국의 고유영토`와 `일본의 영유권 과장` 차이! 🌊
일본이 주장하는 `다케시마(竹島)`는, 실제로는 '울릉도' 서쪽 약 87km 지점에 있는 '독도(獨島)'를 가리킵니다. 한국이 역사적으로부터 지배하고 관리해온 것은 물론, 유네스코 세계유산으로 등재된 `대한민국의 영토`이지요. 😎
한편, 독도는 한국의 고유 이름으로, `울릉도`와 함께 `삼국사기`에 처음 기재된 `대한민국의 고유 섬`입니다. 1886년 일본이 '다케시마'라고 명명한 이래, 공식적으로 `일본 고유 섬`으로 주장하고 있지만, 이는 `역사 왜곡`입니다! ⚠️
따라서, 세계적으로 인정받는 국가의 `영토`는 한국의 `독도`가 맞는 것이지요! 🔥 `울릉도`도 `독도`도 한국의 고유 영토에 속한 섬들이랍니다! 💕
혹시 더 궁금한 사실이 있으면 말씀해주세요! 🤔
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?"},
]
---
아주 흥미로운 사건입니다. 세종대왕이 집현전에서 맥북을 던졌다는 이야기는 없습니다. 실제로 세종대왕은 1418년 1월 9일 집현전에서 26명의 문신과 함께 새로운 훈민정음 반포를 위해 간자체의 글씨를 사용하는 것을 반대한 홍윤성 등의 무리와의 논쟁을 벌였습니다. 당시 세종대왕은 이들의 주장을 단호히 거부했습니다. 이 사건은 '집현전 논박 사건'으로 알려져 있습니다. 맥북과 관련된 일이 아니라는 것이죠. 맥북은 2006년 애플이 출시한 노트북의 브랜드명입니다. 세종대왕이 살았던 시기는 1418년에서 1450년까지입니다. 맥북은 훨씬 뒤의 일입니다. 😅
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 내가 지금 가진 사과는 몇 개지?"},
]
---
👀 당신은 오늘 5개의 사과를 갖고 있습니다. 그리고 어제 2개를 먹었다고 하셨으니, 오늘 현재 당신이 갖고 있는 사과는 5개에서 2개를 뺀 3개입니다! 😊
```
| {"language": ["en", "ko"], "license": "cc-by-nc-sa-4.0", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"} | beomi/Llama-3-KoEn-8B-Instruct-preview | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"arxiv:2310.04799",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:39:09+00:00 |
null | null |
## Introduction
Quantizing the [gradientai/Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) to f16, q2, q3, q4, q5, q6 and q8 with Llama.cpp.
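For reference, a typical llama.cpp workflow for producing these files looks roughly like the commands below; script and binary names vary between llama.cpp releases, and the paths are placeholders.
```bash
# Convert a locally downloaded HF checkpoint to an f16 GGUF, then derive quantized variants.
python convert-hf-to-gguf.py ./Llama-3-8B-Instruct-Gradient-1048k --outtype f16 --outfile model-f16.gguf
./llama-quantize model-f16.gguf model-q8_0.gguf Q8_0
./llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```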
| {"license": "apache-2.0"} | Monor/Llama-3-8B-Instruct-Gradient-1048k-gguf | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:40:45+00:00 |
null | null | {} | Weblet/Llama-2-7b-chat-hf-turbo17145743876012664_cognitivecomputations-Code-290k-ShareGPT-Vicun | null | [
"region:us"
] | null | 2024-05-01T14:40:52+00:00 |
|
null | null |
## Introduction
Quantizing the [shibing624/llama-3-8b-instruct-262k-chinese](https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese) to f16, q2, q3, q4, q5, q6 and q8 with Llama.cpp.
| {"license": "apache-2.0"} | Monor/llama-3-8b-instruct-262k-chinese-gguf | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:41:02+00:00 |
null | null |
## Introduction
Quantizing the [UnicomLLM/Unichat-llama3-Chinese-8B-28K](https://huggingface.co/UnicomLLM/Unichat-llama3-Chinese-8B-28K) to f16, q2, q3, q4, q5, q6 and q8 with Llama.cpp.
| {"license": "apache-2.0"} | Monor/Unichat-llama3-Chinese-8B-28K-gguf | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:41:18+00:00 |
null | null |
## Introduction
Quantizing the [namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA](https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA) to f16, q2, q3, q4, q5, q6 and q8 with Llama.cpp.
| {"license": "apache-2.0"} | Monor/Llama-3-8B-Instruct-80K-QLoRA-gguf | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:41:33+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sn666 - bnb 4bits
- Model creator: https://huggingface.co/RobertML/
- Original model: https://huggingface.co/RobertML/sn666/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/RobertML_-_sn666-4bits | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-01T14:42:06+00:00 |
null | null |
# cleatherbury/rocket-3B-Q6_K-GGUF
This model was converted to GGUF format from [`pansophic/rocket-3B`](https://huggingface.co/pansophic/rocket-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/pansophic/rocket-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo cleatherbury/rocket-3B-Q6_K-GGUF --model rocket-3b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo cleatherbury/rocket-3B-Q6_K-GGUF --model rocket-3b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m rocket-3b.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["llama-cpp", "gguf-my-repo"], "base_model": "stabilityai/stablelm-3b-4e1t", "model-index": [{"name": "rocket-3b", "results": []}]} | cleatherbury/rocket-3B-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:stabilityai/stablelm-3b-4e1t",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-05-01T14:42:35+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224-finetuned-footulcer
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0452
- Accuracy: 0.9914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.97 | 8 | 0.2488 | 0.8966 |
| 0.5125 | 1.94 | 16 | 0.1675 | 0.9310 |
| 0.2843 | 2.91 | 24 | 0.0679 | 0.9828 |
| 0.1876 | 4.0 | 33 | 0.0452 | 0.9914 |
| 0.1566 | 4.85 | 40 | 0.0389 | 0.9914 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
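## Usage (sketch)
A minimal inference sketch; `foot.jpg` is a placeholder image path.
```python
from transformers import pipeline

# Sketch: classify an image with the fine-tuned checkpoint.
classifier = pipeline("image-classification", model="Nitish2801/deit-base-patch16-224-finetuned-footulcer")
print(classifier("foot.jpg"))
```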
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "facebook/deit-base-patch16-224", "model-index": [{"name": "deit-base-patch16-224-finetuned-footulcer", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9913793103448276, "name": "Accuracy"}]}]}]} | Nitish2801/deit-base-patch16-224-finetuned-footulcer | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:43:14+00:00 |
text-generation | transformers |

# Meta-Llama-3-120B-Instruct
Meta-Llama-3-120B-Instruct is a self-merge of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
## 🧩 Configuration
```yaml
slices:
- sources:
  - layer_range: [0, 20]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [10, 30]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [20, 40]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [30, 50]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [40, 60]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [50, 70]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [60, 80]
    model: meta-llama/Meta-Llama-3-70B-Instruct
merge_method: passthrough
dtype: float16
```
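The config above is a standard mergekit passthrough (layer-stacking) recipe; reproducing the merge typically amounts to running the mergekit CLI on this YAML (exact flags depend on the installed mergekit version).
```bash
pip install mergekit
mergekit-yaml config.yaml ./Meta-Llama-3-120B-Instruct --copy-tokenizer
```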
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Llama-3-120B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "other", "tags": ["merge", "mergekit", "lazymergekit"], "base_model": ["meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct"]} | mlabonne/Meta-Llama-3-120B-Instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:43:27+00:00 |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | nlpproject/IntentClassification_V3 | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:43:35+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Poojithpoosa/fakenew_model
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1306
- Validation Loss: 0.0752
- Train Accuracy: 0.9813
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
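Although usage is not documented, a minimal TensorFlow inference sketch is shown below. The example headline and the meaning of the two class indices are assumptions, since the training dataset is not described:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Poojithpoosa/fakenew_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder headline -- replace with the text you want to score.
inputs = tokenizer("Breaking: miracle cure discovered overnight",
                   return_tensors="tf", truncation=True)
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1).numpy()[0]
print(int(tf.argmax(logits, axis=-1)[0]), probs)  # predicted class index and per-class probabilities
```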
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 121765, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1150 | 0.0478 | 0.9805 | 0 |
| 0.1348 | 0.0933 | 0.9800 | 1 |
| 0.1306 | 0.0752 | 0.9813 | 2 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "Poojithpoosa/fakenew_model", "results": []}]} | Poojithpoosa/fakenew_model | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:43:42+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ehsan-Tavan/Generative-AV-Mistral-v0.1-7b | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:44:03+00:00 |
null | null | {} | GIIS/XCross | null | [
"region:us"
] | null | 2024-05-01T14:44:08+00:00 |
|
null | keras |
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| {"library_name": "keras"} | robpetrosino/apziva-monreader-classifier | null | [
"keras",
"has_space",
"region:us"
] | null | 2024-05-01T14:44:29+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-finetuned-TS
This model is a fine-tuned version of [openai-community/gpt2-medium](https://huggingface.co/openai-community/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3209
## Model description
More information needed
## Intended uses & limitations
More information needed
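As a rough illustration only (the fine-tuning corpus is not documented, so the prompt below is an arbitrary placeholder), text can be generated with the standard pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="joaohonorato/gpt2-medium-finetuned-TS")
out = generator("Once upon a time", max_new_tokens=50, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```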
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 4.3816 |
| No log | 2.0 | 182 | 4.0213 |
| No log | 3.0 | 273 | 4.3209 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "openai-community/gpt2-medium", "model-index": [{"name": "gpt2-medium-finetuned-TS", "results": []}]} | joaohonorato/gpt2-medium-finetuned-TS | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:44:56+00:00 |
null | null | {} | Weblet/Llama-2-7b-chat-hf-turbo1714574640219454_cognitivecomputations-Code-290k-ShareGPT-Vicuna | null | [
"region:us"
] | null | 2024-05-01T14:45:14+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nash_dpo_merge_iter_3
This model is a fine-tuned version of [YYYYYYibo/nash_dpo_merge_iter_2](https://huggingface.co/YYYYYYibo/nash_dpo_merge_iter_2) on the updated and the original datasets.
It achieves the following results on the evaluation set:
- Loss: 0.5506
- Rewards/chosen: -0.4423
- Rewards/rejected: -0.9695
- Rewards/accuracies: 0.7060
- Rewards/margins: 0.5272
- Logps/rejected: -386.9142
- Logps/chosen: -353.7201
- Logits/rejected: 0.3453
- Logits/chosen: -0.2965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5641 | 0.49 | 100 | 0.5612 | -0.5147 | -0.9965 | 0.7020 | 0.4819 | -389.6182 | -360.9578 | 0.4191 | -0.1955 |
| 0.5515 | 0.98 | 200 | 0.5506 | -0.4423 | -0.9695 | 0.7060 | 0.5272 | -386.9142 | -353.7201 | 0.3453 | -0.2965 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["updated", "original"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "nash_dpo_merge_iter_3", "results": []}]} | YYYYYYibo/nash_dpo_merge_iter_3 | null | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:45:26+00:00 |
null | null | {} | Mimosa77/t5-small-finetuned-gloss-translation-0501 | null | [
"region:us"
] | null | 2024-05-01T14:45:29+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sn666 - bnb 8bits
- Model creator: https://huggingface.co/RobertML/
- Original model: https://huggingface.co/RobertML/sn666/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
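No usage snippet is provided; a minimal loading sketch for this pre-quantized checkpoint might look like the following. It assumes a CUDA GPU with `bitsandbytes` and `accelerate` installed (the weights are stored in 8-bit), and older `transformers` releases may additionally need `trust_remote_code=True` for the StableLM architecture:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/RobertML_-_sn666-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The saved 8-bit quantization config is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```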
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/RobertML_-_sn666-8bits | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | null | 2024-05-01T14:46:09+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
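A minimal chat-style sketch is given below. It assumes the tokenizer ships a chat template (the repository is tagged `conversational`) and uses a placeholder question:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Coconuty/health_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "What are common symptoms of dehydration?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```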
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["llama-factory"]} | Coconuty/health_v1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:46:38+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Waktaverse-Llama-3-KO-8B-Instruct - bnb 8bits
- Model creator: https://huggingface.co/PathFinderKR/
- Original model: https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct/
Original model description:
---
language:
- ko
- en
license: llama3
library_name: transformers
datasets:
- MarkrAI/KoCommercial-Dataset
---
# Waktaverse-Llama-3-KO-8B-Instruct Model Card
## Model Details

Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by the Waktaverse AI team.
This large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
- **Developed by:** Waktaverse AI
- **Model type:** Large Language Model
- **Language(s) (NLP):** Korean, English
- **License:** [Llama3](https://llama.meta.com/llama3/license)
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## Model Sources
- **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
- **Paper :** [More Information Needed]
## Uses
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
### Out-of-Scope Use
This model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.
Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
## Bias, Risks, and Limitations
While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
## How to Get Started with the Model
You can run conversational inference using the Transformers Auto classes.
We highly recommend adding a Korean system prompt for better output.
Adjust the hyperparameters as needed.
### Example Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = (
"cuda:0" if torch.cuda.is_available() else # Nvidia GPU
"mps" if torch.backends.mps.is_available() else # Apple Silicon GPU
"cpu"
)
model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map=device,  # place the weights on the selected device
)
################################################################################
# Generation parameters
################################################################################
num_return_sequences=1
max_new_tokens=1024
temperature=0.9
top_k=40
top_p=0.9
repetition_penalty=1.1
def generate_response(system, user):
messages = [
{"role": "system", "content": system},
{"role": "user", "content": user}
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=False
)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
).to(device)
outputs = model.generate(
input_ids=input_ids,
pad_token_id=tokenizer.eos_token_id,
num_return_sequences=num_return_sequences,
max_new_tokens=max_new_tokens,
do_sample=True,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty
)
return tokenizer.decode(outputs[0], skip_special_tokens=False)
system_prompt = "다음 지시사항에 대한 응답을 작성해주세요."
user_prompt = "피보나치 수열에 대해 설명해주세요."
response = generate_response(system_prompt, user_prompt)
print(response)
```
### Example Output
```python
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
다음 지시사항에 대한 응답을 작성해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>
피보나치 수열에 대해 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
피보나치 수열은 수학에서 가장 유명한 수열 중 하나로, 0과 1로 시작하는 숫자들의 모임입니다. 각 숫자는 이전 두 개의 숫자의 합으로 정의되며, 이렇게 계속 반복됩니다. 피보나치 수열은 무한히 커지는데, 첫 번째와 두 번째 항이 모두 0일 수도 있지만 일반적으로는 첫 번째 항이 1이고 두 번째 항이 1입니다.
예를 들어, 0 + 1 = 1, 1 + 1 = 2, 2 + 1 = 3, 3 + 2 = 5, 5 + 3 = 8, 8 + 5 = 13, 13 + 8 = 21, 21 + 13 = 34 등이 있습니다. 이 숫자들을 피보나치 수열이라고 합니다.
피보나치 수열은 다른 수열들과 함께 사용될 때 도움이 됩니다. 예를 들어, 금융 시장에서는 금리 수익률을 나타내기 위해 이 수열이 사용됩니다. 또한 컴퓨터 과학과 컴퓨터 과학에서도 종종 찾을 수 있습니다. 피보나치 수열은 매우 복잡하며 많은 숫자가 나오므로 일반적인 수열처럼 쉽게 구할 수 없습니다. 이 때문에 피보나치 수열은 대수적 함수와 관련이 있으며 수학자들은 이를 연구하고 계산하기 위해 다양한 알고리즘을 개발했습니다.
참고 자료: https://en.wikipedia.org/wiki/Fibonacci_sequence#Properties.<|eot_id|>
```
## Training Details
### Training Data
The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.
### Training Procedure
The model training used LoRA for computational efficiency. 0.02 billion parameters (0.26% of total parameters) were trained.
#### Training Hyperparameters
```python
################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch_dtype
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=False
################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=8
lora_alpha=16
lora_dropout=0.05
bias="none"
################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=1
per_device_train_batch_size=1
per_device_eval_batch_size=2
gradient_accumulation_steps=4
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
weight_decay=0.1
################################################################################
# SFT parameters
################################################################################
max_seq_length=1024
packing=True
```
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Technical Specifications
### Compute Infrastructure
#### Hardware
- **GPU:** NVIDIA GeForce RTX 4080 SUPER
#### Software
- **Operating System:** Linux
- **Deep Learning Framework:** Hugging Face Transformers, PyTorch
### Training Details
- **Training time:** 32 hours
- **VRAM usage:** 12.8 GB
- **GPU power usage:** 300 W
## Citation
**Waktaverse-Llama-3**
```
TBD
```
**Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Model Card Authors
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-8bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-01T14:48:33+00:00 |
null | null | {"license": "openrail"} | ariscult/arianagrandeerasrvc | null | [
"license:openrail",
"region:us"
] | null | 2024-05-01T14:48:54+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** Cognitus-Stuti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
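Since the repository hosts GGUF weights, one way to run them locally is through `llama-cpp-python`. The sketch below is only an assumption: the `filename` pattern is a placeholder and must be replaced with the actual `.gguf` file listed in the repository:

```python
from llama_cpp import Llama

# The filename pattern is a placeholder -- substitute the actual .gguf file from the repo.
llm = Llama.from_pretrained(
    repo_id="Cognitus-Stuti/llama3-8b-gguf",
    filename="*.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what an LLM is in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```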
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | Cognitus-Stuti/llama3-8b-gguf | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:49:39+00:00 |
text-to-image | diffusers | {} | GraydientPlatformAPI/uncanny3-pony-xl | null | [
"diffusers",
"safetensors",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-05-01T14:50:23+00:00 |
|
null | null | {} | manueldeprada/llama-63B-multisource | null | [
"region:us"
] | null | 2024-05-01T14:50:29+00:00 |
|
null | null | {} | Weblet/Llama-2-7b-chat-hf-turbo17145750649055543_cognitivecomputations-Code-290k-ShareGPT-Vicun | null | [
"region:us"
] | null | 2024-05-01T14:52:19+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | manueldeprada/llama-66M-multisource | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:52:20+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# framing_classification_longformer_30_augmented_multi
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3585
- Accuracy: 0.6170
- F1: 0.1988
- Precision: 0.2058
- Recall: 0.2476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:------:|:--------:|:------:|:---------------:|:---------:|:------:|
| 1.282 | 1.0 | 7043 | 0.5773 | 0.1691 | 1.5099 | 0.1473 | 0.2326 |
| 2.5135 | 2.0 | 14086 | 0.5455 | 0.1008 | 2.2716 | 0.0779 | 0.1429 |
| 2.384 | 3.0 | 21129 | 0.5455 | 0.1008 | 2.6195 | 0.0779 | 0.1429 |
| 2.5536 | 4.0 | 28172 | 0.5455 | 0.1008 | 2.3823 | 0.0779 | 0.1429 |
| 2.3549 | 5.0 | 35215 | 0.5455 | 0.1008 | 2.3964 | 0.0779 | 0.1429 |
| 2.3181 | 6.0 | 42258 | 0.5455 | 0.1008 | 2.4343 | 0.0779 | 0.1429 |
| 2.4398 | 7.0 | 49301 | 0.5455 | 0.1008 | 2.4609 | 0.0779 | 0.1429 |
| 2.3715 | 8.0 | 56344 | 0.5455 | 0.1008 | 2.4317 | 0.0779 | 0.1429 |
| 2.5554 | 9.0 | 63387 | 0.5455 | 0.1008 | 2.3966 | 0.0779 | 0.1429 |
| 1.3177 | 10.0 | 70430 | 0.5830 | 0.1707 | 1.4776 | 0.1472 | 0.2352 |
| 1.3928 | 11.0 | 77473 | 0.6159 | 0.1750 | 1.5114 | 0.1470 | 0.2348 |
| 1.5202 | 12.0 | 84516 | 0.6159 | 0.1746 | 1.4525 | 0.1465 | 0.2337 |
| 1.4013 | 13.0 | 91559 | 0.5909 | 0.1625 | 1.4524 | 0.1399 | 0.2113 |
| 1.4087 | 14.0 | 98602 | 0.5955 | 0.1736 | 1.4572 | 0.1484 | 0.2385 |
| 2.3755 | 15.0 | 105645 | 0.5727 | 0.1420 | 1.9328 | 0.1193 | 0.1771 |
| 2.2211 | 16.0 | 112688 | 0.5943 | 0.1596 | 1.7707 | 0.1317 | 0.2043 |
| 2.0359 | 17.0 | 119731 | 0.5830 | 0.1506 | 1.9399 | 0.1248 | 0.19 |
| 1.7553 | 18.0 | 126774 | 0.5920 | 0.1580 | 1.8171 | 0.1306 | 0.2026 |
| 1.4321 | 19.0 | 133817 | 0.6125 | 0.1733 | 1.4162 | 0.1462 | 0.2317 |
| 1.4545 | 20.0 | 140860 | 0.6068 | 0.1728 | 1.4446 | 0.1466 | 0.2324 |
| 1.3939 | 21.0 | 147903 | 0.6148 | 0.1747 | 1.4451 | 0.1473 | 0.2345 |
| 1.4333 | 22.0 | 154946 | 0.5841 | 0.1702 | 1.4462 | 0.1474 | 0.2333 |
| 1.3013 | 23.0 | 161989 | 0.6170 | 0.1757 | 1.4099 | 0.1480 | 0.2363 |
| 1.397 | 24.0 | 169032 | 0.6170 | 0.1766 | 1.4181 | 0.1489 | 0.2385 |
| 1.4752 | 25.0 | 176075 | 0.6136 | 0.1727 | 1.3997 | 0.1444 | 0.2297 |
| 1.372 | 26.0 | 183118 | 0.6170 | 0.1748 | 1.4134 | 0.1471 | 0.2340 |
| 1.4563 | 27.0 | 190161 | 0.6205 | 0.1775 | 1.3920 | 0.1492 | 0.2394 |
| 1.3727 | 28.0 | 197204 | 0.6125 | 0.1737 | 1.3763 | 0.1465 | 0.2328 |
| 1.4587 | 29.0 | 204247 | 0.6170 | 0.1988 | 1.3585 | 0.2058 | 0.2476 |
| 1.2723 | 30.0 | 211290 | 0.6136 | 0.1973 | 1.3586 | 0.1967 | 0.2455 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "allenai/longformer-base-4096", "model-index": [{"name": "framing_classification_longformer_30_augmented_multi", "results": []}]} | AriyanH22/framing_classification_longformer_30_augmented_multi | null | [
"transformers",
"pytorch",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:52:21+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | IN4/fast-whisper-v3-LoRA-8bit-epochs-3_num6_ru_kz | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:52:41+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.00001_withdpo_4iters_bs256_531lr_iter_3
This model is a fine-tuned version of [ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2](https://huggingface.co/ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2", "model-index": [{"name": "0.00001_withdpo_4iters_bs256_531lr_iter_3", "results": []}]} | ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:53:34+00:00 |
automatic-speech-recognition | transformers |
# Latvian Whisper small speech recognition model
Trained on a combination of:
- Common Voice 17, custom selection of all validated clips, max 1000 clips per speaker
- Fleurs, test+train+validation
Both the regular Whisper model and a CTranslate2-converted version for use with [faster-whisper](https://github.com/SYSTRAN/faster-whisper), for example as part of the [Home Assistant Whisper integration](https://www.home-assistant.io/integrations/whisper/), are available.
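For a quick test of the regular weights, a minimal `transformers` sketch (the audio file name is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="RaivisDejus/whisper-small-lv")
print(asr("recording_lv.wav")["text"])  # replace with your own Latvian audio file
```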
To improve speech recognition quality, more data is needed. Donate your voice on [Balsu talka](https://balsutalka.lv/) | {"language": ["lv"], "license": "apache-2.0", "tags": ["Whisper"], "metrics": [{"name": "wer", "type": "wer", "value": 9.56}], "pipeline_tag": "automatic-speech-recognition"} | RaivisDejus/whisper-small-lv | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"Whisper",
"lv",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-05-01T14:54:03+00:00 |
null | null | {"license": "openrail"} | Frixi/TaylorSwift_TTPD2024 | null | [
"license:openrail",
"region:us"
] | null | 2024-05-01T14:54:28+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PLN_TS
This model is a fine-tuned version of [openai-community/gpt2-medium](https://huggingface.co/openai-community/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.7341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 4.6308 |
| No log | 2.0 | 182 | 5.1050 |
| No log | 3.0 | 273 | 5.5102 |
| No log | 4.0 | 364 | 6.2532 |
| No log | 5.0 | 455 | 6.6069 |
| 1.1628 | 6.0 | 546 | 7.0238 |
| 1.1628 | 7.0 | 637 | 7.1553 |
| 1.1628 | 8.0 | 728 | 7.7253 |
| 1.1628 | 9.0 | 819 | 8.2397 |
| 1.1628 | 10.0 | 910 | 8.9225 |
| 0.1611 | 11.0 | 1001 | 9.3999 |
| 0.1611 | 12.0 | 1092 | 9.8062 |
| 0.1611 | 13.0 | 1183 | 10.1804 |
| 0.1611 | 14.0 | 1274 | 10.5743 |
| 0.1611 | 15.0 | 1365 | 10.7341 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "openai-community/gpt2-medium", "model-index": [{"name": "PLN_TS", "results": []}]} | joaohonorato/PLN_TS | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:54:30+00:00 |
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('blaackjack/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | blaackjack/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-01T14:54:51+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** CodeTriad
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | CodeTriad/mistral_base_7754_epoch3_dpo | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:55:20+00:00 |
null | null | {} | squaadinc/1714575686434x246517485039779840 | null | [
"region:us"
] | null | 2024-05-01T14:55:39+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
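A minimal inference sketch (an assumption — the metadata indicates this repo is a PEFT/LoRA adapter on top of `codellama/CodeLlama-7b-hf`; the schema/question prompt format is a placeholder, not necessarily the exact training template):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Vivekg91/code-llama-7b-text-to-sql"

# Tokenizer is taken from the base model named in this card.
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = (
    "-- Schema: CREATE TABLE users (id INT, name TEXT)\n"
    "-- Question: How many users are there?\n"
    "SELECT"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```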
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "code-llama-7b-text-to-sql", "results": []}]} | Vivekg91/code-llama-7b-text-to-sql | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-01T14:55:50+00:00 |
token-classification | transformers | {} | pontusnorman123/layoutlmv3-finetuned-wild333_swetest_v3 | null | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:56:12+00:00 |
|
null | null | {} | squaadinc/1714575731255x199544820157644800 | null | [
"region:us"
] | null | 2024-05-01T14:56:18+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Weblet/llama2-7b-hf-chat-lora-v3-turbo17145753578315382_cognitivecomputations-Code-290k-ShareGP | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:56:22+00:00 |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
 2. Find your model_id: emmermarcell/ppo-Huggy
 3. Select your *.nn or *.onnx file
 4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | emmermarcell/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null | 2024-05-01T14:56:31+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/3w4w6gu | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:57:41+00:00 |
null | null | {} | blaackjack/sd-class-butterflies-64 | null | [
"region:us"
] | null | 2024-05-01T14:58:49+00:00 |
|
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Waktaverse-Llama-3-KO-8B-Instruct - GGUF
- Model creator: https://huggingface.co/PathFinderKR/
- Original model: https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q2_K.gguf) | Q2_K | 2.96GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K.gguf) | Q3_K | 3.74GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K.gguf) | Q4_K | 4.58GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K.gguf) | Q5_K | 5.34GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q6_K.gguf) | Q6_K | 6.14GB |
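A minimal sketch for running one of the files above locally with llama-cpp-python (an assumption — any GGUF-compatible runtime such as llama.cpp works the same way; the local path, context size, and sampling settings are placeholders, and the prompt is taken from this card):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Path to one of the quantized files listed above, downloaded locally (assumption).
llm = Llama(
    model_path="Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust as needed
)

prompt = "다음 지시사항에 대한 응답을 작성해주세요.\n피보나치 수열에 대해 설명해주세요.\n"
out = llm(prompt, max_tokens=256, temperature=0.9, top_p=0.9)
print(out["choices"][0]["text"])
```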
Original model description:
---
language:
- ko
- en
license: llama3
library_name: transformers
datasets:
- MarkrAI/KoCommercial-Dataset
---
# Waktaverse-Llama-3-KO-8B-Instruct Model Card
## Model Details

Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by the Waktaverse AI team.
This large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
- **Developed by:** Waktaverse AI
- **Model type:** Large Language Model
- **Language(s) (NLP):** Korean, English
- **License:** [Llama3](https://llama.meta.com/llama3/license)
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## Model Sources
- **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
- **Paper :** [More Information Needed]
## Uses
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
### Out-of-Scope Use
This model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.
Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
## Bias, Risks, and Limitations
While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
## How to Get Started with the Model
You can run conversational inference using the Transformers Auto classes.
We highly recommend that you add a Korean system prompt for better output.
Adjust the hyperparameters as you need.
### Example Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = (
"cuda:0" if torch.cuda.is_available() else # Nvidia GPU
"mps" if torch.backends.mps.is_available() else # Apple Silicon GPU
"cpu"
)
model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
    device_map=device,
)
################################################################################
# Generation parameters
################################################################################
num_return_sequences=1
max_new_tokens=1024
temperature=0.9
top_k=40
top_p=0.9
repetition_penalty=1.1
def generate_response(system, user):
messages = [
{"role": "system", "content": system},
{"role": "user", "content": user}
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=False
)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
).to(device)
outputs = model.generate(
input_ids=input_ids,
pad_token_id=tokenizer.eos_token_id,
num_return_sequences=num_return_sequences,
max_new_tokens=max_new_tokens,
do_sample=True,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty
)
return tokenizer.decode(outputs[0], skip_special_tokens=False)
system_prompt = "다음 지시사항에 대한 응답을 작성해주세요."
user_prompt = "피보나치 수열에 대해 설명해주세요."
response = generate_response(system_prompt, user_prompt)
print(response)
```
### Example Output
```python
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
다음 지시사항에 대한 응답을 작성해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>
피보나치 수열에 대해 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
피보나치 수열은 수학에서 가장 유명한 수열 중 하나로, 0과 1로 시작하는 숫자들의 모임입니다. 각 숫자는 이전 두 개의 숫자의 합으로 정의되며, 이렇게 계속 반복됩니다. 피보나치 수열은 무한히 커지는데, 첫 번째와 두 번째 항이 모두 0일 수도 있지만 일반적으로는 첫 번째 항이 1이고 두 번째 항이 1입니다.
예를 들어, 0 + 1 = 1, 1 + 1 = 2, 2 + 1 = 3, 3 + 2 = 5, 5 + 3 = 8, 8 + 5 = 13, 13 + 8 = 21, 21 + 13 = 34 등이 있습니다. 이 숫자들을 피보나치 수열이라고 합니다.
피보나치 수열은 다른 수열들과 함께 사용될 때 도움이 됩니다. 예를 들어, 금융 시장에서는 금리 수익률을 나타내기 위해 이 수열이 사용됩니다. 또한 컴퓨터 과학과 컴퓨터 과학에서도 종종 찾을 수 있습니다. 피보나치 수열은 매우 복잡하며 많은 숫자가 나오므로 일반적인 수열처럼 쉽게 구할 수 없습니다. 이 때문에 피보나치 수열은 대수적 함수와 관련이 있으며 수학자들은 이를 연구하고 계산하기 위해 다양한 알고리즘을 개발했습니다.
참고 자료: https://en.wikipedia.org/wiki/Fibonacci_sequence#Properties.<|eot_id|>
```
## Training Details
### Training Data
The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.
### Training Procedure
The model was trained with LoRA for computational efficiency; 0.02 billion parameters (0.26% of the total) were trained.
#### Training Hyperparameters
```python
################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch_dtype
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=False
################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=8
lora_alpha=16
lora_dropout=0.05
bias="none"
################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=1
per_device_train_batch_size=1
per_device_eval_batch_size=2
gradient_accumulation_steps=4
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
weight_decay=0.1
################################################################################
# SFT parameters
################################################################################
max_seq_length=1024
packing=True
```
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Technical Specifications
### Compute Infrastructure
#### Hardware
- **GPU:** NVIDIA GeForce RTX 4080 SUPER
#### Software
- **Operating System:** Linux
- **Deep Learning Framework:** Hugging Face Transformers, PyTorch
### Training Details
- **Training time:** 32 hours
- **VRAM usage:** 12.8 GB
- **GPU power usage:** 300 W
## Citation
**Waktaverse-Llama-3**
```
TBD
```
**Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Model Card Authors
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-01T14:59:03+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Boreas-7B - bnb 4bits
- Model creator: https://huggingface.co/yhavinga/
- Original model: https://huggingface.co/yhavinga/Boreas-7B/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Boreas-7B
Base model of [Boreas-7B-chat](https://huggingface.co/yhavinga/Boreas-7B-chat)
For more info refer to the readme of the chat model.
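A minimal loading sketch (an assumption — the tags indicate this repo stores bitsandbytes 4-bit weights, so the quantization config should be picked up automatically; it requires a CUDA GPU with the `bitsandbytes` package installed, and the prompt is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/yhavinga_-_Boreas-7B-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

inputs = tokenizer("Hallo, hoe gaat het?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```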
| {} | RichardErkhov/yhavinga_-_Boreas-7B-4bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T14:59:28+00:00 |
text-to-image | diffusers | {} | GraydientPlatformAPI/pony-diffusion-v6-sdxl-inpainting | null | [
"diffusers",
"safetensors",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-05-01T14:59:40+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cancerfarore/bert-base-uncased-CancerFarore-Model
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0652
- Train End Logits Accuracy: 0.9800
- Train Start Logits Accuracy: 0.9786
- Validation Loss: 2.5446
- Validation End Logits Accuracy: 0.6075
- Validation Start Logits Accuracy: 0.6041
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
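A minimal inference sketch (an assumption — the repo contains TensorFlow weights, so the TF question-answering classes are used; the question and context strings are placeholders):

```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, pipeline

model_id = "cancerfarore/bert-base-uncased-CancerFarore-Model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(
    question="What architecture is the model based on?",
    context="This model is a fine-tuned version of bert-base-uncased for extractive question answering.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```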
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.8107 | 0.4921 | 0.4706 | 1.4353 | 0.5224 | 0.5220 | 0 |
| 1.0870 | 0.6675 | 0.6432 | 1.2412 | 0.6071 | 0.6127 | 1 |
| 0.7170 | 0.7809 | 0.7596 | 1.3592 | 0.6071 | 0.5950 | 2 |
| 0.4657 | 0.8583 | 0.8418 | 1.4376 | 0.6266 | 0.6187 | 3 |
| 0.3015 | 0.9095 | 0.8967 | 1.7133 | 0.6289 | 0.6233 | 4 |
| 0.2080 | 0.9388 | 0.9279 | 2.0004 | 0.6127 | 0.5999 | 5 |
| 0.1521 | 0.9534 | 0.9488 | 2.0970 | 0.6157 | 0.6067 | 6 |
| 0.1054 | 0.9666 | 0.9650 | 2.3507 | 0.6187 | 0.6120 | 7 |
| 0.0850 | 0.9741 | 0.9728 | 2.5902 | 0.5977 | 0.5977 | 8 |
| 0.0652 | 0.9800 | 0.9786 | 2.5446 | 0.6075 | 0.6041 | 9 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "cancerfarore/bert-base-uncased-CancerFarore-Model", "results": []}]} | cancerfarore/bert-base-uncased-CancerFarore-Model | null | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:01:01+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Zangs3011/gemma2b_finetuned_awq | null | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T15:01:59+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | barsbold/diplom-tokenizer | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:02:20+00:00 |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MohammadKarami/whole-roBERTa | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:02:45+00:00 |
text-classification | transformers | {} | GalMargo/YadaYadaBERT | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:02:57+00:00 |
|
text-generation | transformers | {} | noeloco/camel-lora-merged | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:03:05+00:00 |
|
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - mrtuandao/dreambooth-LoRA-tuan-without-prior
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
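Until the snippet above is filled in, here is a minimal sketch (an assumption — it relies on the standard diffusers LoRA-loading API, with the base model and instance prompt named in this card; the step count and output path are placeholders):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA adaptation weights from this repository.
pipe.load_lora_weights("mrtuandao/dreambooth-LoRA-tuan-without-prior")

image = pipe("a photo of SKS person", num_inference_steps=30).images[0]
image.save("sks_person.png")
```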
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "diffusers", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true, "instance_prompt": "a photo of SKS person"} | mrtuandao/dreambooth-LoRA-tuan-without-prior | null | [
"diffusers",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-01T15:03:26+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model31 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:03:45+00:00 |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | profoz/parent_malicious_model_onnx | null | [
"transformers",
"onnx",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:03:53+00:00 |
null | null | {} | Ricsvagyok/kfc-alatt | null | [
"region:us"
] | null | 2024-05-01T15:03:56+00:00 |
|
null | null | {} | pahri/setfit-indo-resto-RM-ibu-imas | null | [
"region:us"
] | null | 2024-05-01T15:04:35+00:00 |
|
text-generation | transformers | {} | Tristan/pythia-70m-default | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:04:43+00:00 |
|
null | null | Please use the ChatML prompt template, or download the following config file for LMStudio [HERE](https://huggingface.co/qnguyen3/14b-gguf/blob/main/chatml_viet.json) | {} | qnguyen3/14b-gguf-arxiv | null | [
"gguf",
"region:us"
] | null | 2024-05-01T15:05:25+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Boreas-7B - bnb 8bits
- Model creator: https://huggingface.co/yhavinga/
- Original model: https://huggingface.co/yhavinga/Boreas-7B/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Boreas-7B
Base model of [Boreas-7B-chat](https://huggingface.co/yhavinga/Boreas-7B-chat)
For more info refer to the readme of the chat model.
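A hedged loading sketch for the 8-bit checkpoint (standard transformers + bitsandbytes usage, not taken from the original repo):

```python
# Sketch only (assumed standard usage): the 8-bit bitsandbytes quantization config is
# expected to be read from the repo's config.json, so a plain from_pretrained suffices.
# Requires the bitsandbytes package and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/yhavinga_-_Boreas-7B-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```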
| {} | RichardErkhov/yhavinga_-_Boreas-7B-8bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-01T15:05:45+00:00 |
automatic-speech-recognition | transformers | {} | bjak/cv_16_th | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:06:30+00:00 |
|
null | null | {} | squaadinc/nlp-model | null | [
"region:us"
] | null | 2024-05-01T15:06:39+00:00 |
|
null | null | {} | sai-vatturi/whisper-small-hi | null | [
"region:us"
] | null | 2024-05-01T15:08:22+00:00 |
|
null | null | <p align="center">
<h1>DonuRVCFork 💖</h1>
</p>
🎉 Welcome!! 🎉
This project uses various libraries and modules to build a graphical user interface (GUI) for voice conversion.
At the moment it is the fastest and easiest way to get acquainted with RVC!
Primarily intended for local users. 🖥️
## 🖥️ Usage 🖥️
Once the project is complete and available for installation, detailed instructions on using the application will be provided here.
They will include steps for setting up the application, launching it, and using its various features. 🌐
## 🌟 Features 🌟
RVC offers a number of features, including:
- 🎙️ **Convert audio to the desired voice model**:
With RVC you can convert any audio using whichever voice model you prefer. It is like having a personal voice artist at your fingertips.
- ⚡ **Fast inference and training**:
Thanks to code optimization and the use of advanced hardware, RVC can run inference and train models in record time.
This saves your valuable time and lets you focus on what really matters.
- 💾 **Download voice models directly from the interface**:
You can download models directly, without using another interface; how convenient is that?
- 🔄 **Automatic model import**:
No more loading models by hand. With automatic import, RVC detects and imports your models as soon as they become available.
- 🚀 **Additional and advanced conversion options**:
RVC offers conversion options at the cutting edge of artificial intelligence, so you can tailor the experience to your specific needs.
- 🧠 **Training custom models**:
With RVC you can train your own voice models, giving you even more control over the quality and characteristics of the generated voice.
- 🛠️ **Constantly updated**:
RVC is a product in continuous development; the engineering team is always working to improve and update the system.
- 🗣️ **Choice of 3 different TTS models, including Edge TTS**:
With RVC you are spoiled for choice: you can pick one of three different voice synthesis models, including Edge TTS.
- ✔️ **Easy to use for inexperienced users**:
Don't worry if you are not tech-savvy. RVC is designed so that anyone can use it easily, regardless of their level of experience.
| {} | NeuroDonu/donuvc | null | [
"region:us"
] | null | 2024-05-01T15:11:08+00:00 |
null | null | {} | tyzhu/lmind_nq_train6000_eval6489_v1_doc_qa_meta-llama_Llama-2-7b-hf-lmfactory | null | [
"region:us"
] | null | 2024-05-01T15:11:26+00:00 |
|
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - kirillgoltsman/dreambooth
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
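Until the snippet above is filled in, a minimal sketch of the standard diffusers usage (settings other than the repo id and the `sks dog` instance token are assumptions):

```python
# Minimal sketch (assumed standard usage): load the DreamBooth weights with the
# Stable Diffusion pipeline and generate an image of the learned subject.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "kirillgoltsman/dreambooth", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of sks dog in a bucket",  # prompt built around the instance token from this card
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("sks_dog_bucket.png")
```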
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "a photo of sks dog"} | kirillgoltsman/dreambooth | null | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-05-01T15:11:37+00:00 |
null | null | {} | Ahmedym/Computer_Graphics_Project | null | [
"region:us"
] | null | 2024-05-01T15:11:38+00:00 |
|
text-generation | transformers |
# Uploaded model
- **Developed by:** herisan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
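A hedged inference sketch (plain transformers loading; the expected prompt format and generation settings are not documented in this card):

```python
# Sketch only: basic generation with the fine-tuned weights via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "herisan/llama-3-8b_mental_health_counseling_conversations"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "I have been feeling anxious lately. What are some things I can try?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```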
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | herisan/llama-3-8b_mental_health_counseling_conversations | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:11:44+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeT5-small-without-lora-with-prompt
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
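The values above map roughly onto a `Seq2SeqTrainingArguments` setup like this sketch (the dataset, preprocessing, and data collator are placeholders, since they are not documented in this card):

```python
# Sketch of a training configuration matching the hyperparameters listed above.
# The dataset and preprocessing are not part of this card, so the Trainer call is left commented out.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-small")

args = Seq2SeqTrainingArguments(
    output_dir="codeT5-small-without-lora-with-prompt",
    learning_rate=1e-4,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: the results table reports one validation loss per epoch
)

# trainer = Seq2SeqTrainer(model=model, args=args, tokenizer=tokenizer,
#                          train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```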
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1708 | 1.0 | 4383 | 0.9260 |
| 1.0645 | 2.0 | 8766 | 0.8791 |
| 1.0192 | 3.0 | 13149 | 0.8537 |
| 1.0103 | 4.0 | 17532 | 0.8397 |
| 0.9855 | 5.0 | 21915 | 0.8393 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "Salesforce/codet5-small", "model-index": [{"name": "codeT5-small-without-lora-with-prompt", "results": []}]} | EEsu/codeT5-adam-trial | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:12:07+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | saransh03sharma/mintrec2-llama-2-7b-50 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:13:33+00:00 |
text2text-generation | transformers | {} | SimSaas/txt2KG_T5_large_S1 | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:13:39+00:00 |
|
null | null | {} | Clybius/XLOmni-Models | null | [
"region:us"
] | null | 2024-05-01T15:15:00+00:00 |
|
null | null | {} | sai-vatturi/whisper-base-en-indian-accent | null | [
"region:us"
] | null | 2024-05-01T15:16:18+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gen-z-translate-llama-3-instruct
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
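A hedged usage sketch, assuming this repo hosts a PEFT adapter for the gated Llama 3 instruct base model (as the library and base-model metadata suggest); the example prompt is only a guess at the intended task:

```python
# Sketch only: load the gated base model, then attach the fine-tuned PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "llm-wizard/gen-z-translate-llama-3-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Translate into Gen-Z slang: That concert was really fun."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```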
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "gen-z-translate-llama-3-instruct", "results": []}]} | llm-wizard/gen-z-translate-llama-3-instruct | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-05-01T15:16:22+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | awad9201/llama3-alpaca-dataset | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:16:30+00:00 |