modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 to 2025-06-27) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 to 2025-06-27) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
mlx-community/granite-20b-code-base-8bit | mlx-community | 2024-05-14T17:17:01Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"code",
"granite",
"mlx",
"dataset:codeparrot/github-code-clean",
"dataset:bigcode/starcoderdata",
"dataset:open-web-math/open-web-math",
"dataset:math-ai/StackMathQA",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T21:39:50Z | ---
license: apache-2.0
library_name: transformers
tags:
- code
- granite
- mlx
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
- open-web-math/open-web-math
- math-ai/StackMathQA
metrics:
- code_eval
pipeline_tag: text-generation
inference: true
model-index:
- name: granite-20b-code-base
results:
- task:
type: text-generation
dataset:
name: MBPP
type: mbpp
metrics:
- type: pass@1
value: 43.8
name: pass@1
- task:
type: text-generation
dataset:
name: MBPP+
type: evalplus/mbppplus
metrics:
- type: pass@1
value: 51.6
name: pass@1
- task:
type: text-generation
dataset:
name: HumanEvalSynthesis(Python)
type: bigcode/humanevalpack
metrics:
- type: pass@1
value: 48.2
name: pass@1
- type: pass@1
value: 50.0
name: pass@1
- type: pass@1
value: 59.1
name: pass@1
- type: pass@1
value: 32.3
name: pass@1
- type: pass@1
value: 40.9
name: pass@1
- type: pass@1
value: 35.4
name: pass@1
- type: pass@1
value: 17.1
name: pass@1
- type: pass@1
value: 18.3
name: pass@1
- type: pass@1
value: 23.2
name: pass@1
- type: pass@1
value: 10.4
name: pass@1
- type: pass@1
value: 25.6
name: pass@1
- type: pass@1
value: 18.3
name: pass@1
- type: pass@1
value: 23.2
name: pass@1
- type: pass@1
value: 23.8
name: pass@1
- type: pass@1
value: 14.6
name: pass@1
- type: pass@1
value: 26.2
name: pass@1
- type: pass@1
value: 15.2
name: pass@1
- type: pass@1
value: 3.0
name: pass@1
---
# mlx-community/granite-20b-code-base-8bit
The model [mlx-community/granite-20b-code-base-8bit](https://huggingface.co/mlx-community/granite-20b-code-base-8bit) was converted to MLX format from [ibm-granite/granite-20b-code-base](https://huggingface.co/ibm-granite/granite-20b-code-base) using mlx-lm version **0.13.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/granite-20b-code-base-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
sgarrett/test | sgarrett | 2024-05-14T17:16:20Z | 146 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:nferruz/ProtGPT2",
"base_model:finetune:nferruz/ProtGPT2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T17:04:45Z | ---
license: apache-2.0
base_model: nferruz/ProtGPT2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [nferruz/ProtGPT2](https://huggingface.co/nferruz/ProtGPT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 17.4453
- Accuracy: 0.0333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `Trainer` setup follows this list):
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
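For orientation, a minimal sketch of how these settings map onto the 🤗 `Trainer` is given below. The one-row in-memory dataset is a placeholder, since the card does not say what data the model was trained on; the optimizer and scheduler values listed above are the `TrainingArguments` defaults unless set explicitly.
```python
# Minimal sketch, assuming causal-LM fine-tuning of ProtGPT2 with the
# hyperparameters listed above; the one-row dataset is a placeholder.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("nferruz/ProtGPT2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained("nferruz/ProtGPT2")

# Placeholder corpus: a single protein-like sequence.
ds = Dataset.from_dict({"text": ["<|endoftext|>\nMKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ\n"]})
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True),
            batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="output",
    learning_rate=1e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds,
    eval_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```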
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bryanlimy/ViV1T | bryanlimy | 2024-05-14T17:15:52Z | 0 | 0 | null | [
"neuroai",
"neuro-ai",
"visual-response-prediction",
"en",
"license:mit",
"region:us"
] | null | 2024-05-06T18:46:00Z | ---
license: mit
language:
- en
tags:
- neuroai
- neuro-ai
- visual-response-prediction
---
# ViV1T model checkpoint
Model checkpoints used in the ViV1T (team `dunedin`) submission to the [NeurIPS Sensorium 2023 challenge](https://www.sensorium-competition.net/), which took 🥉 third place overall.
Checkpoints and training logs from 5 ViV1T models, each trained with a different seed, are available.
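As a hedged convenience, the checkpoint files themselves can be fetched with `huggingface_hub`; the repo id below is taken from this card, and the model-loading code itself lives in the GitHub repo.
```python
# Minimal sketch: download all checkpoint and log files from this repo.
# snapshot_download returns the local directory holding the downloaded files.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="bryanlimy/ViV1T")
print(local_dir)
```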
Please check [github.com/bryanlimy/ViV1T](https://github.com/bryanlimy/ViV1T) for more information and example code. |
CowCowC/Adu_mod_id_img | CowCowC | 2024-05-14T17:11:20Z | 192 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-14T16:35:49Z | ---
model-index:
- name: adult-content-classifier-image
results: []
pipeline_tag: image-classification
---
# adult-content-identify-image
(text version [here](https://huggingface.co/jiechau/adult-content-identify-text) 文字版本請參考 [這裡](https://huggingface.co/jiechau/adult-content-identify-text))
Determines whether products sold online are adult content. Input: an image; output: 0 = unknown, 1 = adult content, 2 = general merchandise.
判斷網路銷售商品是否屬於成人內容。輸入圖片內容,輸出結果: 0 未知, 1 成人內容, 2 一般商品。
# use transformers pipeline
```python
from transformers import pipeline, AutoConfig
pipe = pipeline("image-classification", model="jiechau/adult-content-identify-image")
config = AutoConfig.from_pretrained("jiechau/adult-content-identify-image")
label2id = config.label2id
id2label = config.id2label
q = 'https://xxx.xxx.xxx/images/xxx/xxx.webp'
q = 'https://xxx.xxx.xxx/images/xxx/xxx.jpg'
result = pipe(q)
print(result)
print(label2id[result[0]['label']])
# [{'label': 'adult_成人商品', 'score': 0.7516837120056152}, {'label': 'regular_一般商品', 'score': 0.2475457787513733}, {'label': 'unknown', 'score': 0.0007705678581260145}]
# 1
``` |
VanCan23/SFTDPO_1epoch_adapter | VanCan23 | 2024-05-14T17:08:17Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-09T17:09:35Z | ---
license: apache-2.0
---
|
SakuraLLM/Sakura-32B-Qwen2beta-v0.9-GGUF | SakuraLLM | 2024-05-14T17:06:32Z | 304 | 8 | null | [
"gguf",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-08T12:58:05Z | ---
license: cc-by-nc-sa-4.0
---
|
PrawitK/llama3_8b_han_16bit | PrawitK | 2024-05-14T17:04:25Z | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T17:04:24Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrawitK/llama3_8b_han_1 | PrawitK | 2024-05-14T17:04:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T17:04:12Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** PrawitK
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fine-tuned/dutch-legal-c-64-24 | fine-tuned | 2024-05-14T16:54:08Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Law",
"Legislation",
"Netherlands",
"Policy",
"Support",
"custom_code",
"en",
"dataset:fine-tuned/dutch-legal-c-64-24",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-14T16:53:53Z | ---
license: apache-2.0
datasets:
- fine-tuned/dutch-legal-c-64-24
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Law
- Legislation
- Netherlands
- Policy
- Support
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
legal document search for Dutch legislation
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/dutch-legal-c-64-24',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
keyoae/MBBkeyo | keyoae | 2024-05-14T16:53:11Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T16:48:56Z | ---
license: apache-2.0
---
|
erwinyonata/distilbert-base-uncased-lora-text-classification | erwinyonata | 2024-05-14T16:53:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T16:39:48Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7392
- Accuracy: {'accuracy': 0.901}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hypothetical PEFT/LoRA setup is sketched after this list):
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
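Since the card states only the Trainer settings, the sketch below shows one plausible way the LoRA side could be set up with PEFT; the rank, alpha, and target modules are assumptions, not values from this card.
```python
# Hypothetical LoRA setup for distilbert-base-uncased sequence classification;
# r, lora_alpha, and target_modules are assumptions (the card omits them).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,
    lora_alpha=16,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projections
    lora_dropout=0.1,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```
The wrapped model can then be trained with a standard `Trainer` using the hyperparameters listed above.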
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 125 | 0.2589 | {'accuracy': 0.896} |
| No log | 2.0 | 250 | 0.4331 | {'accuracy': 0.868} |
| No log | 3.0 | 375 | 0.3884 | {'accuracy': 0.901} |
| 0.2587 | 4.0 | 500 | 0.4673 | {'accuracy': 0.895} |
| 0.2587 | 5.0 | 625 | 0.6184 | {'accuracy': 0.899} |
| 0.2587 | 6.0 | 750 | 0.6478 | {'accuracy': 0.902} |
| 0.2587 | 7.0 | 875 | 0.7249 | {'accuracy': 0.899} |
| 0.0338 | 8.0 | 1000 | 0.7446 | {'accuracy': 0.893} |
| 0.0338 | 9.0 | 1125 | 0.7290 | {'accuracy': 0.9} |
| 0.0338 | 10.0 | 1250 | 0.7392 | {'accuracy': 0.901} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
HariprasathSB/whisper-peft2 | HariprasathSB | 2024-05-14T16:51:41Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:vasista22/whisper-tamil-medium",
"base_model:adapter:vasista22/whisper-tamil-medium",
"region:us"
] | null | 2024-05-13T20:23:43Z | ---
library_name: peft
base_model: vasista22/whisper-tamil-medium
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 |
Mag0g/Ezekiel27_5 | Mag0g | 2024-05-14T16:51:35Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T16:50:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Pavithira9112/rare-puppers | Pavithira9112 | 2024-05-14T16:48:44Z | 197 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-14T16:48:39Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9464285969734192
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### apple

#### mango

#### papaya

#### pineapple

#### watermelon
 |
tsavage68/Transaminitis_L3_1000steps_1e8rate_01beta_CSFTDPO | tsavage68 | 2024-05-14T16:48:35Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Transaminitis_L3_1000rate_1e7_SFT",
"base_model:finetune:tsavage68/Transaminitis_L3_1000rate_1e7_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T16:32:13Z | ---
license: llama3
base_model: tsavage68/Transaminitis_L3_1000rate_1e7_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Transaminitis_L3_1000steps_1e8rate_01beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Transaminitis_L3_1000steps_1e8rate_01beta_DPO
This model is a fine-tuned version of [tsavage68/Transaminitis_L3_1000rate_1e7_SFT](https://huggingface.co/tsavage68/Transaminitis_L3_1000rate_1e7_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6939
- Rewards/chosen: 0.0011
- Rewards/rejected: 0.0026
- Rewards/accuracies: 0.4100
- Rewards/margins: -0.0014
- Logps/rejected: -18.5291
- Logps/chosen: -18.5229
- Logits/rejected: -1.0656
- Logits/chosen: -1.0644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `DPOTrainer` sketch follows this list):
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
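A minimal sketch of how these settings could be passed to TRL's `DPOTrainer` is shown below; the preference pairs are placeholders, and `beta=0.1` is inferred from the `01beta` in the model name rather than stated in the card.
```python
# Hedged sketch of the DPO run above (TRL ~0.8 API); the one-row preference
# dataset is a placeholder and beta=0.1 is an assumption from the model name.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "tsavage68/Transaminitis_L3_1000rate_1e7_SFT"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

train_dataset = Dataset.from_dict({
    "prompt": ["placeholder prompt"],
    "chosen": ["placeholder preferred answer"],
    "rejected": ["placeholder dispreferred answer"],
})

args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-8,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL creates a frozen copy of the model as reference
    args=args,
    beta=0.1,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```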
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6937 | 0.2 | 25 | 0.6931 | 0.0001 | 0.0001 | 0.0100 | 0.0000 | -18.5542 | -18.5333 | -1.0657 | -1.0646 |
| 0.6937 | 0.4 | 50 | 0.6931 | 0.0014 | 0.0012 | 0.5400 | 0.0002 | -18.5426 | -18.5205 | -1.0657 | -1.0645 |
| 0.6937 | 0.6 | 75 | 0.6938 | 0.0004 | 0.0017 | 0.4600 | -0.0013 | -18.5374 | -18.5302 | -1.0653 | -1.0643 |
| 0.6941 | 0.8 | 100 | 0.6929 | 0.0003 | -0.0003 | 0.5 | 0.0006 | -18.5573 | -18.5312 | -1.0667 | -1.0656 |
| 0.6922 | 1.0 | 125 | 0.6934 | 0.0022 | 0.0026 | 0.4800 | -0.0004 | -18.5288 | -18.5123 | -1.0666 | -1.0654 |
| 0.6945 | 1.2 | 150 | 0.6937 | 0.0009 | 0.0020 | 0.4500 | -0.0011 | -18.5347 | -18.5251 | -1.0648 | -1.0637 |
| 0.6934 | 1.4 | 175 | 0.6927 | 0.0058 | 0.0049 | 0.5600 | 0.0010 | -18.5061 | -18.4759 | -1.0650 | -1.0639 |
| 0.6934 | 1.6 | 200 | 0.6937 | 0.0009 | 0.0021 | 0.4200 | -0.0011 | -18.5342 | -18.5251 | -1.0652 | -1.0640 |
| 0.6953 | 1.8 | 225 | 0.6935 | -0.0007 | -0.0002 | 0.4700 | -0.0006 | -18.5563 | -18.5415 | -1.0650 | -1.0638 |
| 0.6906 | 2.0 | 250 | 0.6935 | 0.0008 | 0.0014 | 0.4900 | -0.0006 | -18.5411 | -18.5264 | -1.0657 | -1.0645 |
| 0.693 | 2.2 | 275 | 0.6935 | 0.0028 | 0.0035 | 0.5100 | -0.0007 | -18.5196 | -18.5059 | -1.0662 | -1.0650 |
| 0.6945 | 2.4 | 300 | 0.6934 | 0.0013 | 0.0018 | 0.5300 | -0.0005 | -18.5368 | -18.5211 | -1.0658 | -1.0646 |
| 0.6934 | 2.6 | 325 | 0.6933 | 0.0002 | 0.0005 | 0.5 | -0.0002 | -18.5500 | -18.5320 | -1.0657 | -1.0646 |
| 0.6914 | 2.8 | 350 | 0.6933 | -0.0038 | -0.0036 | 0.4900 | -0.0003 | -18.5903 | -18.5727 | -1.0655 | -1.0643 |
| 0.6914 | 3.0 | 375 | 0.6935 | 0.0004 | 0.0011 | 0.4900 | -0.0007 | -18.5435 | -18.5301 | -1.0665 | -1.0654 |
| 0.6914 | 3.2 | 400 | 0.6927 | 0.0048 | 0.0038 | 0.4900 | 0.0009 | -18.5165 | -18.4865 | -1.0655 | -1.0643 |
| 0.6949 | 3.4 | 425 | 0.6933 | 0.0020 | 0.0023 | 0.4900 | -0.0003 | -18.5321 | -18.5146 | -1.0660 | -1.0649 |
| 0.6922 | 3.6 | 450 | 0.6937 | -0.0020 | -0.0009 | 0.5 | -0.0011 | -18.5634 | -18.5540 | -1.0653 | -1.0642 |
| 0.6926 | 3.8 | 475 | 0.6927 | 0.0040 | 0.0030 | 0.4800 | 0.0010 | -18.5242 | -18.4937 | -1.0656 | -1.0645 |
| 0.693 | 4.0 | 500 | 0.6942 | 0.0022 | 0.0042 | 0.4400 | -0.0020 | -18.5124 | -18.5118 | -1.0658 | -1.0646 |
| 0.693 | 4.2 | 525 | 0.6932 | 0.0030 | 0.0031 | 0.4500 | -0.0000 | -18.5239 | -18.5038 | -1.0662 | -1.0649 |
| 0.6922 | 4.4 | 550 | 0.6936 | 0.0028 | 0.0036 | 0.5100 | -0.0009 | -18.5182 | -18.5066 | -1.0651 | -1.0640 |
| 0.6934 | 4.6 | 575 | 0.6938 | 0.0014 | 0.0027 | 0.4800 | -0.0013 | -18.5278 | -18.5202 | -1.0656 | -1.0645 |
| 0.6937 | 4.8 | 600 | 0.6941 | 0.0023 | 0.0041 | 0.4500 | -0.0019 | -18.5132 | -18.5113 | -1.0653 | -1.0642 |
| 0.691 | 5.0 | 625 | 0.6936 | 0.0024 | 0.0033 | 0.5100 | -0.0009 | -18.5219 | -18.5103 | -1.0654 | -1.0642 |
| 0.6926 | 5.2 | 650 | 0.6942 | 0.0006 | 0.0027 | 0.4100 | -0.0021 | -18.5279 | -18.5280 | -1.0655 | -1.0643 |
| 0.6953 | 5.4 | 675 | 0.6938 | 0.0027 | 0.0040 | 0.4400 | -0.0013 | -18.5149 | -18.5071 | -1.0656 | -1.0645 |
| 0.6937 | 5.6 | 700 | 0.6930 | 0.0042 | 0.0038 | 0.5 | 0.0004 | -18.5169 | -18.4921 | -1.0657 | -1.0645 |
| 0.693 | 5.8 | 725 | 0.6935 | 0.0022 | 0.0027 | 0.4600 | -0.0006 | -18.5272 | -18.5127 | -1.0656 | -1.0644 |
| 0.6937 | 6.0 | 750 | 0.6935 | 0.0014 | 0.0022 | 0.4400 | -0.0008 | -18.5327 | -18.5198 | -1.0656 | -1.0645 |
| 0.6918 | 6.2 | 775 | 0.6936 | 0.0017 | 0.0024 | 0.4300 | -0.0008 | -18.5303 | -18.5175 | -1.0655 | -1.0644 |
| 0.6934 | 6.4 | 800 | 0.6938 | 0.0008 | 0.0021 | 0.4200 | -0.0013 | -18.5333 | -18.5261 | -1.0655 | -1.0644 |
| 0.6902 | 6.6 | 825 | 0.6939 | 0.0011 | 0.0026 | 0.4100 | -0.0014 | -18.5291 | -18.5229 | -1.0656 | -1.0644 |
| 0.6937 | 6.8 | 850 | 0.6939 | 0.0011 | 0.0026 | 0.4100 | -0.0014 | -18.5291 | -18.5229 | -1.0656 | -1.0644 |
| 0.6949 | 7.0 | 875 | 0.6939 | 0.0011 | 0.0026 | 0.4100 | -0.0014 | -18.5291 | -18.5229 | -1.0656 | -1.0644 |
| 0.693 | 7.2 | 900 | 0.6939 | 0.0011 | 0.0026 | 0.4100 | -0.0014 | -18.5291 | -18.5229 | -1.0656 | -1.0644 |
| 0.6941 | 7.4 | 925 | 0.6939 | 0.0011 | 0.0026 | 0.4100 | -0.0014 | -18.5291 | -18.5229 | -1.0656 | -1.0644 |
| 0.6937 | 7.6 | 950 | 0.6939 | 0.0011 | 0.0026 | 0.4100 | -0.0014 | -18.5291 | -18.5229 | -1.0656 | -1.0644 |
| 0.6926 | 7.8 | 975 | 0.6939 | 0.0011 | 0.0026 | 0.4100 | -0.0014 | -18.5291 | -18.5229 | -1.0656 | -1.0644 |
| 0.6918 | 8.0 | 1000 | 0.6939 | 0.0011 | 0.0026 | 0.4100 | -0.0014 | -18.5291 | -18.5229 | -1.0656 | -1.0644 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
saransh03sharma/mintrec2-mistral-2-7b-50-1 | saransh03sharma | 2024-05-14T16:47:07Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T16:41:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ferrazzipietro/LS_Mistral-7B-v0.1_adapters_en.layer1_NoQuant_16_32_0.01_8_0.0002 | ferrazzipietro | 2024-05-14T16:45:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T13:12:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/Goku-8x22B-v0.1 | MaziyarPanahi | 2024-05-14T16:45:35Z | 30 | 8 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"sharegpt",
"axolotl",
"conversational",
"fr",
"it",
"de",
"es",
"en",
"dataset:philschmid/guanaco-sharegpt-style",
"base_model:v2ray/Mixtral-8x22B-v0.1",
"base_model:finetune:v2ray/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-12T10:48:25Z | ---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
tags:
- moe
- mixtral
- sharegpt
- axolotl
library_name: transformers
base_model: v2ray/Mixtral-8x22B-v0.1
inference: false
model_creator: MaziyarPanahi
model_name: Goku-8x22B-v0.1
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
datasets:
- philschmid/guanaco-sharegpt-style
---
<img src="./Goku-8x22b-v0.1.webp" alt="Goku 8x22B v0.1 Logo" width="500" style="margin-left: auto; margin-right: auto; display: block;"/>
# Goku-8x22B-v0.1 (Goku 141b-A35b)
A fine-tuned version of the [v2ray/Mixtral-8x22B-v0.1](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1) model on the `philschmid/guanaco-sharegpt-style` dataset. This model has a total of 141B parameters, with only 35B active.
## How to use it
**Use a pipeline as a high-level helper:**
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="MaziyarPanahi/Goku-8x22B-v0.1")
```
**Load model directly:**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Goku-8x22B-v0.1")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Goku-8x22B-v0.1")
```
**Load via Adapter:**
You can also use PEFT to just load the adapter if you already have one of these models downloaded: [v2ray/Mixtral-8x22B-v0.1](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1) or [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) (they are the same)
```python
# assuming you have already downloaded the base model
# resizing the vocab
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline

model_id = "v2ray/Mixtral-8x22B-v0.1"
peft_model_id = "~/.cache/huggingface/hub/models--MaziyarPanahi--Goku-8x22B-v0.1/adapter"

tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# I have added 2 new tokens for the ChatML template;
# this step is required if you are using PEFT/Adapter
model.resize_token_embeddings(len(tokenizer))
model.load_adapter(peft_model_id)

# you can even have TextStreamer and a text-generation pipeline with your adapter
streamer = TextStreamer(tokenizer)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=750,
    temperature=0.6,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.1,
    return_full_text=False,
    add_special_tokens=False,
    streamer=streamer,
)
```
## Examples
`Goku-8x22B-v0.1` has been tested on text generation, long-context question answering, coding, and some reasoning. In the next version I will use more `math`- and `coding`-related datasets.
This is a sample story written by [MaziyarPanahi/Goku-8x22B-v0.1](https://huggingface.co/MaziyarPanahi/Goku-8x22B-v0.1/)
```
Goku had heard a commotion from his house but when he went to check he saw nothing. He thought to himself, "I'll let it go, it was probably just a bird or something. I'm sure it will be fine." But that was when he heard the commotion again, so he went outside and this time he saw two figures on the horizon. One of the figures was a giant pinkish-purple creature, while the other was small, pink, ball-shaped thing.
As the figures approached, Goku realized the large creature was his former enemy, the powerful Majin Buu. And the smaller creature was Kirby, a powerful Star Warrior from the planet Popstar. Goku couldn't believe his eyes.
The two creatures approached Goku menacingly. "Kirby and I have teamed up," said Majin Buu. "We're going to destroy the world!"
Goku was taken aback by the statement. He had never considered the possibility of these two powerful creatures joining forces. He knew he had to put a stop to them, before they could cause any more damage.
He took a deep breath and faced the two creatures. "You two won't get away with this," Goku said firmly. "I won't let you destroy the world."
Majin Buu scoffed, "You can't stop us! Kirby and I are too powerful!"
Goku quickly formed an energy ball in his hands and faced the two creatures. "We'll see about that," he said.
The battle that ensued was intense. The two creatures worked together, using their powerful energy attacks to try to overcome Goku. But Goku kept fighting, using his own powerful energy attacks to counter their moves.
After what seemed like an eternity, Goku managed to get the upper hand. He used a powerful energy attack to defeat the two creatures. After they were defeated, Goku looked around and saw the damage that had been caused by the battle. He knew he still had a lot of work ahead of him in order to prevent any further destruction, but he was determined to do his best.
He summoned all of his power and focused it into a powerful energy attack. The energy spread throughout his body and he felt his power grow stronger. With a battle cry, he launched the attack at the two creatures.
The energy hit them both, sending them flying back, stunned for a moment. Goku continued to pressure them with his energy attacks, but they soon recovered and began to counter-attack with their own energy blasts.
Goku knew he had to act quickly if he was going to defeat them. He focused his energy into one powerful attack, and launched it at Kirby. The attack hit and the Star Warrior was sent flying away.
Goku then focused his attention on Majin Buu. He launched a series of energy attacks, using his signature technique, the Kamehameha, and managed to defeat the powerful creature.
After the battle, Goku looked around at the destruction that had been caused by the two creatures. He knew he still had a lot of work ahead of him in order to prevent any further destruction, but he was determined to do his best.
With the two creatures defeated, Goku knew he still had a job to do. He took a deep breath and set out to repair the damage that had been caused by the two powerful creatures. He worked for hours, using his energy to put everything back in order and ensuring that the world was safe once again.
Goku's journey was long and hard but, in the end, he was successful. He defeated two powerful enemies and saved the world from destroyers. Thanks to his hard work, the world was able to heal and once again become a place of peace and prosperity.
``` |
NikolayKozloff/malaysian-llama-3-8b-instruct-16k-Q8_0-GGUF | NikolayKozloff | 2024-05-14T16:42:43Z | 1 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"ms",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T16:42:19Z | ---
language:
- ms
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/malaysian-llama-3-8b-instruct-16k-Q8_0-GGUF
This model was converted to GGUF format from [`mesolitica/malaysian-llama-3-8b-instruct-16k`](https://huggingface.co/mesolitica/malaysian-llama-3-8b-instruct-16k) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mesolitica/malaysian-llama-3-8b-instruct-16k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/malaysian-llama-3-8b-instruct-16k-Q8_0-GGUF --model malaysian-llama-3-8b-instruct-16k.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/malaysian-llama-3-8b-instruct-16k-Q8_0-GGUF --model malaysian-llama-3-8b-instruct-16k.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m malaysian-llama-3-8b-instruct-16k.Q8_0.gguf -n 128
```
|
tsavage68/Transaminitis_L3_1000steps_1e5rate_01beta_CSFTDPO | tsavage68 | 2024-05-14T16:40:24Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Transaminitis_L3_1000rate_1e7_SFT",
"base_model:finetune:tsavage68/Transaminitis_L3_1000rate_1e7_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T16:34:17Z | ---
license: llama3
base_model: tsavage68/Transaminitis_L3_1000rate_1e7_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Transaminitis_L3_1000steps_1e5rate_01beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Transaminitis_L3_1000steps_1e5rate_01beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Transaminitis_L3_1000rate_1e7_SFT](https://huggingface.co/tsavage68/Transaminitis_L3_1000rate_1e7_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2656
- Rewards/chosen: -5.8394
- Rewards/rejected: -13.5464
- Rewards/accuracies: 0.9500
- Rewards/margins: 7.7070
- Logps/rejected: -154.0191
- Logps/chosen: -76.9285
- Logits/rejected: -1.0971
- Logits/chosen: -1.0952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.8157 | 0.2 | 25 | 0.7130 | -1.6812 | -1.6422 | 0.2000 | -0.0390 | -34.9765 | -35.3459 | -1.0074 | -1.0078 |
| 0.6531 | 0.4 | 50 | 0.5572 | 1.4920 | 1.0999 | 0.5400 | 0.3921 | -7.5562 | -3.6147 | -0.6331 | -0.6288 |
| 0.0069 | 0.6 | 75 | 0.0638 | 1.5026 | -8.0172 | 0.9900 | 9.5198 | -98.7265 | -3.5080 | -1.0032 | -0.9076 |
| 1.4987 | 0.8 | 100 | 0.7768 | -3.4746 | -3.5322 | 0.5400 | 0.0576 | -53.8765 | -53.2803 | -0.4138 | -0.4136 |
| 0.7987 | 1.0 | 125 | 0.7220 | -3.4829 | -3.5110 | 0.5400 | 0.0281 | -53.6649 | -53.3632 | -0.7087 | -0.7087 |
| 0.7438 | 1.2 | 150 | 0.7114 | -3.2843 | -3.2535 | 0.4600 | -0.0308 | -51.0900 | -51.3775 | -1.0310 | -1.0310 |
| 0.6949 | 1.4 | 175 | 0.7051 | -3.3085 | -3.2855 | 0.4000 | -0.0230 | -51.4100 | -51.6195 | -0.7593 | -0.7593 |
| 0.7 | 1.6 | 200 | 0.7007 | -3.3122 | -3.2981 | 0.4400 | -0.0141 | -51.5352 | -51.6561 | -0.7261 | -0.7261 |
| 0.7004 | 1.8 | 225 | 0.7092 | -3.5268 | -3.5014 | 0.4600 | -0.0254 | -53.5688 | -53.8022 | -1.0639 | -1.0640 |
| 0.7056 | 2.0 | 250 | 0.7048 | -3.3574 | -3.3377 | 0.4800 | -0.0197 | -51.9312 | -52.1080 | -0.8329 | -0.8329 |
| 0.6829 | 2.2 | 275 | 0.6964 | -3.4182 | -3.4152 | 0.5400 | -0.0030 | -52.7066 | -52.7166 | -1.0186 | -1.0187 |
| 0.7101 | 2.4 | 300 | 0.6992 | -4.3808 | -4.3804 | 0.5400 | -0.0003 | -62.3591 | -62.3421 | -1.3638 | -1.3638 |
| 0.7107 | 2.6 | 325 | 0.7081 | -4.1483 | -4.1266 | 0.4600 | -0.0217 | -59.8212 | -60.0177 | -1.3589 | -1.3589 |
| 0.7035 | 2.8 | 350 | 0.6913 | -3.0909 | -3.0966 | 0.2900 | 0.0058 | -49.5212 | -49.4432 | -0.7017 | -0.7017 |
| 0.7112 | 3.0 | 375 | 0.7096 | -4.4207 | -4.3939 | 0.4600 | -0.0268 | -62.4938 | -62.7416 | -1.3752 | -1.3752 |
| 0.659 | 3.2 | 400 | 0.7992 | -4.2280 | -4.1290 | 0.5200 | -0.0990 | -59.8449 | -60.8146 | -1.0809 | -1.0815 |
| 0.6253 | 3.4 | 425 | 0.9164 | -4.3837 | -4.1124 | 0.5200 | -0.2713 | -59.6787 | -62.3715 | -0.7324 | -0.7317 |
| 0.956 | 3.6 | 450 | 0.5266 | -3.8419 | -5.4570 | 0.6800 | 1.6151 | -73.1246 | -56.9532 | -0.3747 | -0.3742 |
| 0.5604 | 3.8 | 475 | 0.6506 | -3.5933 | -6.2168 | 0.7000 | 2.6234 | -80.7223 | -54.4675 | -0.1960 | -0.1952 |
| 0.8776 | 4.0 | 500 | 0.5657 | -3.9281 | -7.0564 | 0.8400 | 3.1284 | -89.1191 | -57.8147 | -0.6674 | -0.6680 |
| 0.4978 | 4.2 | 525 | 0.6285 | -4.8602 | -10.3518 | 0.8800 | 5.4916 | -122.0728 | -67.1361 | -0.9244 | -0.9236 |
| 1.0258 | 4.4 | 550 | 0.6966 | -5.0528 | -8.7895 | 0.8000 | 3.7367 | -106.4495 | -69.0625 | -0.6216 | -0.6205 |
| 0.3559 | 4.6 | 575 | 0.6527 | -5.5366 | -9.7092 | 0.8100 | 4.1726 | -115.6466 | -73.9002 | -1.1615 | -1.1603 |
| 0.2236 | 4.8 | 600 | 0.3743 | -5.2783 | -10.8881 | 0.9100 | 5.6099 | -127.4360 | -71.3169 | -1.0731 | -1.0714 |
| 0.0995 | 5.0 | 625 | 0.1816 | -4.6140 | -10.2504 | 0.9500 | 5.6364 | -121.0588 | -64.6745 | -1.0550 | -1.0504 |
| 0.4954 | 5.2 | 650 | 0.2771 | -4.9474 | -10.6256 | 0.9000 | 5.6781 | -124.8103 | -68.0087 | -0.9020 | -0.9007 |
| 0.2031 | 5.4 | 675 | 0.2731 | -5.6955 | -12.6949 | 0.9600 | 6.9994 | -145.5037 | -75.4888 | -1.0406 | -1.0388 |
| 0.3665 | 5.6 | 700 | 0.2912 | -5.5615 | -11.9434 | 0.9300 | 6.3819 | -137.9883 | -74.1489 | -0.9311 | -0.9288 |
| 0.132 | 5.8 | 725 | 0.2410 | -6.2707 | -13.3387 | 0.9400 | 7.0680 | -151.9420 | -81.2413 | -1.0742 | -1.0720 |
| 0.1044 | 6.0 | 750 | 0.2450 | -6.0942 | -13.2397 | 0.9500 | 7.1455 | -150.9520 | -79.4765 | -1.0715 | -1.0693 |
| 0.1984 | 6.2 | 775 | 0.2646 | -6.1961 | -13.4718 | 0.9500 | 7.2757 | -153.2727 | -80.4953 | -1.0771 | -1.0748 |
| 0.0156 | 6.4 | 800 | 0.3140 | -6.1100 | -13.6377 | 0.9500 | 7.5277 | -154.9315 | -79.6341 | -1.1101 | -1.1082 |
| 0.2682 | 6.6 | 825 | 0.2528 | -5.9327 | -13.5268 | 0.9600 | 7.5942 | -153.8231 | -77.8608 | -1.0893 | -1.0873 |
| 0.0011 | 6.8 | 850 | 0.2762 | -5.9315 | -13.5461 | 0.9500 | 7.6146 | -154.0158 | -77.8491 | -1.0916 | -1.0895 |
| 0.1031 | 7.0 | 875 | 0.2613 | -5.8587 | -13.5305 | 0.9500 | 7.6718 | -153.8600 | -77.1214 | -1.0933 | -1.0913 |
| 0.0034 | 7.2 | 900 | 0.2675 | -5.8590 | -13.5490 | 0.9500 | 7.6900 | -154.0449 | -77.1244 | -1.0975 | -1.0955 |
| 0.1314 | 7.4 | 925 | 0.2662 | -5.8482 | -13.5520 | 0.9500 | 7.7038 | -154.0743 | -77.0162 | -1.0978 | -1.0958 |
| 0.3318 | 7.6 | 950 | 0.2651 | -5.8403 | -13.5464 | 0.9500 | 7.7060 | -154.0184 | -76.9377 | -1.0974 | -1.0954 |
| 0.1093 | 7.8 | 975 | 0.2653 | -5.8449 | -13.5488 | 0.9500 | 7.7039 | -154.0427 | -76.9835 | -1.0977 | -1.0957 |
| 0.1808 | 8.0 | 1000 | 0.2656 | -5.8394 | -13.5464 | 0.9500 | 7.7070 | -154.0191 | -76.9285 | -1.0971 | -1.0952 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Litzy619/O0513MA | Litzy619 | 2024-05-14T16:39:10Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"base_model:finetune:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T04:28:24Z | ---
license: apache-2.0
base_model: allenai/OLMo-1B
tags:
- generated_from_trainer
model-index:
- name: O0513MA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0513MA
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
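As a sketch, the optimizer/schedule combination above can be reproduced with the transformers helper below; `model` is a placeholder, and the 330 total steps are taken from the results table.
```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=80,
    num_training_steps=330,  # ~110 optimizer steps per epoch x 3 epochs, per the table below
    num_cycles=1,
)
```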
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2491 | 0.09 | 10 | 1.4949 |
| 0.7419 | 0.18 | 20 | 0.2008 |
| 0.1742 | 0.27 | 30 | 0.1637 |
| 0.1554 | 0.36 | 40 | 0.1571 |
| 0.1519 | 0.45 | 50 | 0.1515 |
| 0.1531 | 0.54 | 60 | 0.1494 |
| 0.1497 | 0.63 | 70 | 0.1489 |
| 0.1488 | 0.73 | 80 | 0.1584 |
| 0.148 | 0.82 | 90 | 0.1510 |
| 0.1476 | 0.91 | 100 | 0.1509 |
| 0.1499 | 1.0 | 110 | 0.1486 |
| 0.1456 | 1.09 | 120 | 0.1507 |
| 0.1447 | 1.18 | 130 | 0.1518 |
| 0.1472 | 1.27 | 140 | 0.1486 |
| 0.148 | 1.36 | 150 | 0.1490 |
| 0.1455 | 1.45 | 160 | 0.1487 |
| 0.1463 | 1.54 | 170 | 0.1473 |
| 0.1475 | 1.63 | 180 | 0.1475 |
| 0.1479 | 1.72 | 190 | 0.1505 |
| 0.1454 | 1.81 | 200 | 0.1487 |
| 0.1499 | 1.9 | 210 | 0.1480 |
| 0.1474 | 1.99 | 220 | 0.1498 |
| 0.1464 | 2.08 | 230 | 0.1472 |
| 0.1401 | 2.18 | 240 | 0.1462 |
| 0.1419 | 2.27 | 250 | 0.1483 |
| 0.1426 | 2.36 | 260 | 0.1477 |
| 0.141 | 2.45 | 270 | 0.1461 |
| 0.1402 | 2.54 | 280 | 0.1468 |
| 0.1393 | 2.63 | 290 | 0.1469 |
| 0.1426 | 2.72 | 300 | 0.1455 |
| 0.1417 | 2.81 | 310 | 0.1454 |
| 0.1408 | 2.9 | 320 | 0.1456 |
| 0.1424 | 2.99 | 330 | 0.1456 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
MaziyarPanahi/Llama-3-70B-Instruct-v0.1 | MaziyarPanahi | 2024-05-14T16:38:13Z | 23 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"finetune",
"facebook",
"meta",
"pytorch",
"llama-3",
"chatml",
"conversational",
"en",
"dataset:MaziyarPanahi/truthy-dpo-v0.1-axolotl",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-14T14:23:52Z | ---
language:
- en
license: llama3
library_name: transformers
tags:
- axolotl
- finetune
- facebook
- meta
- pytorch
- llama
- llama-3
- chatml
base_model: meta-llama/Meta-Llama-3-70B-Instruct
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
model_name: Llama-3-70B-Instruct-v0.1
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
---
<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# MaziyarPanahi/Llama-3-70B-Instruct-v0.1
This model is a fine-tune of `meta-llama/Meta-Llama-3-70B-Instruct`. This version adds `<|im_start|>` and `<|im_end|>` as dedicated special tokens, so ChatML prompts are not split into extra tokens.
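A quick way to check that the markers really are single tokens (a sketch; the exact ids depend on the released tokenizer):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("MaziyarPanahi/Llama-3-70B-Instruct-v0.1")
# Each marker should map to one dedicated id instead of being split into pieces
print(tok.convert_tokens_to_ids(["<|im_start|>", "<|im_end|>"]))
```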
# ⚡ Quantized GGUF
All GGUF models are available here: [MaziyarPanahi/Llama-3-70B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-v0.1-GGUF)
# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
coming soon.
# Prompt Template
This model uses `ChatML` prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
# How to use
You can load this model by passing `MaziyarPanahi/Llama-3-70B-Instruct-v0.1` as the model name to Hugging Face's transformers library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch
model_id = "MaziyarPanahi/Llama-3-70B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
# attn_implementation="flash_attention_2"
)
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True
)
streamer = TextStreamer(tokenizer)
text_pipeline = pipeline(  # renamed to avoid shadowing the `pipeline` factory
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16},
    streamer=streamer
)
# Then you can use the pipeline to generate text.
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = text_pipeline(
prompt,
max_new_tokens=2048,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
``` |
SotirisLegkas/Llama3_ALL_BCE_translations_19_shuffled_special_tokens | SotirisLegkas | 2024-05-14T16:28:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-14T16:27:29Z | ---
license: llama3
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: Llama3_ALL_BCE_translations_19_shuffled_special_tokens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3_ALL_BCE_translations_19_shuffled_special_tokens
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4776
- F1 Macro 0.1: 0.0818
- F1 Macro 0.15: 0.0922
- F1 Macro 0.2: 0.1027
- F1 Macro 0.25: 0.1130
- F1 Macro 0.3: 0.1230
- F1 Macro 0.35: 0.1336
- F1 Macro 0.4: 0.1440
- F1 Macro 0.45: 0.1551
- F1 Macro 0.5: 0.1663
- F1 Macro 0.55: 0.1778
- F1 Macro 0.6: 0.1879
- F1 Macro 0.65: 0.1987
- F1 Macro 0.7: 0.2090
- F1 Macro 0.75: 0.2178
- F1 Macro 0.8: 0.2211
- F1 Macro 0.85: 0.2205
- F1 Macro 0.9: 0.2010
- F1 Macro 0.95: 0.1457
- Threshold 0: 0.65
- Threshold 1: 0.75
- Threshold 2: 0.7
- Threshold 3: 0.85
- Threshold 4: 0.8
- Threshold 5: 0.85
- Threshold 6: 0.8
- Threshold 7: 0.8
- Threshold 8: 0.85
- Threshold 9: 0.75
- Threshold 10: 0.85
- Threshold 11: 0.8
- Threshold 12: 0.85
- Threshold 13: 0.95
- Threshold 14: 0.85
- Threshold 15: 0.75
- Threshold 16: 0.85
- Threshold 17: 0.8
- Threshold 18: 0.9
- 0: 0.0619
- 1: 0.1388
- 2: 0.1978
- 3: 0.1328
- 4: 0.2961
- 5: 0.3489
- 6: 0.3179
- 7: 0.1268
- 8: 0.2043
- 9: 0.3668
- 10: 0.3216
- 11: 0.3669
- 12: 0.1276
- 13: 0.1205
- 14: 0.2264
- 15: 0.1576
- 16: 0.3078
- 17: 0.3722
- 18: 0.125
- Max F1: 0.2211
- Mean F1: 0.2273
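Per-label thresholds like those above come out of a simple per-label sweep; a hedged sketch, where `probs` and `labels` are placeholder dev-set arrays of shape `(n_samples, 19)`:
```python
import numpy as np
from sklearn.metrics import f1_score

# For each label, keep the threshold in [0.1, 0.95] that maximizes that label's F1
thresholds = np.arange(0.1, 1.0, 0.05)
best = [
    thresholds[int(np.argmax([f1_score(labels[:, j], probs[:, j] >= t) for t in thresholds]))]
    for j in range(labels.shape[1])
]
```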
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro 0.1 | F1 Macro 0.15 | F1 Macro 0.2 | F1 Macro 0.25 | F1 Macro 0.3 | F1 Macro 0.35 | F1 Macro 0.4 | F1 Macro 0.45 | F1 Macro 0.5 | F1 Macro 0.55 | F1 Macro 0.6 | F1 Macro 0.65 | F1 Macro 0.7 | F1 Macro 0.75 | F1 Macro 0.8 | F1 Macro 0.85 | F1 Macro 0.9 | F1 Macro 0.95 | Threshold 0 | Threshold 1 | Threshold 2 | Threshold 3 | Threshold 4 | Threshold 5 | Threshold 6 | Threshold 7 | Threshold 8 | Threshold 9 | Threshold 10 | Threshold 11 | Threshold 12 | Threshold 13 | Threshold 14 | Threshold 15 | Threshold 16 | Threshold 17 | Threshold 18 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | Max F1 | Mean F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:-------:|
| 3.3824 | 1.0 | 5595 | 4.3847 | 0.0700 | 0.0761 | 0.0818 | 0.0877 | 0.0936 | 0.1000 | 0.1064 | 0.1134 | 0.1196 | 0.1265 | 0.1327 | 0.1381 | 0.1432 | 0.1483 | 0.1465 | 0.1417 | 0.1291 | 0.0836 | 0.65 | 0.9 | 0.85 | 0.9 | 0.75 | 0.6 | 0.8 | 0.75 | 0.9 | 0.9 | 0.9 | 0.85 | 0.9 | 0.0 | 0.85 | 0.75 | 0.6 | 0.6 | 0.9 | 0.0649 | 0.0879 | 0.1603 | 0.0899 | 0.2589 | 0.2876 | 0.2683 | 0.1036 | 0.1245 | 0.2856 | 0.2387 | 0.3033 | 0.0726 | 0.0 | 0.1779 | 0.1109 | 0.2192 | 0.2743 | 0.0641 | 0.1483 | 0.1680 |
| 2.4859 | 2.0 | 11190 | 1.7537 | 0.0881 | 0.0994 | 0.1111 | 0.1210 | 0.1310 | 0.1401 | 0.1472 | 0.1541 | 0.1607 | 0.1676 | 0.1697 | 0.1731 | 0.1768 | 0.1761 | 0.1713 | 0.1575 | 0.1365 | 0.0927 | 0.55 | 0.7 | 0.85 | 0.8 | 0.4 | 0.35 | 0.95 | 0.75 | 0.7 | 0.85 | 0.8 | 0.65 | 0.8 | 0.95 | 0.8 | 0.7 | 0.85 | 0.6 | 0.75 | 0.0534 | 0.1241 | 0.1924 | 0.1020 | 0.2738 | 0.3163 | 0.3072 | 0.1109 | 0.1793 | 0.3414 | 0.2889 | 0.3332 | 0.0831 | 0.0870 | 0.2137 | 0.1305 | 0.2881 | 0.3396 | 0.1254 | 0.1768 | 0.2048 |
| 1.7561 | 3.0 | 16785 | 1.4633 | 0.0840 | 0.0954 | 0.1062 | 0.1164 | 0.1271 | 0.1382 | 0.1485 | 0.1597 | 0.1713 | 0.1809 | 0.1895 | 0.1976 | 0.2056 | 0.2113 | 0.2115 | 0.1995 | 0.1805 | 0.1184 | 0.6 | 0.75 | 0.75 | 0.95 | 0.8 | 0.7 | 0.9 | 0.8 | 0.8 | 0.7 | 0.8 | 0.8 | 0.9 | 0.95 | 0.75 | 0.8 | 0.7 | 0.7 | 0.8 | 0.0581 | 0.1395 | 0.1946 | 0.1235 | 0.2818 | 0.3391 | 0.3151 | 0.1202 | 0.1997 | 0.3656 | 0.3056 | 0.3630 | 0.1340 | 0.1087 | 0.2272 | 0.1482 | 0.2953 | 0.3589 | 0.1233 | 0.2115 | 0.2211 |
| 1.2709 | 4.0 | 22380 | 1.4776 | 0.0818 | 0.0922 | 0.1027 | 0.1130 | 0.1230 | 0.1336 | 0.1440 | 0.1551 | 0.1663 | 0.1778 | 0.1879 | 0.1987 | 0.2090 | 0.2178 | 0.2211 | 0.2205 | 0.2010 | 0.1457 | 0.65 | 0.75 | 0.7 | 0.85 | 0.8 | 0.85 | 0.8 | 0.8 | 0.85 | 0.75 | 0.85 | 0.8 | 0.85 | 0.95 | 0.85 | 0.75 | 0.85 | 0.8 | 0.9 | 0.0619 | 0.1388 | 0.1978 | 0.1328 | 0.2961 | 0.3489 | 0.3179 | 0.1268 | 0.2043 | 0.3668 | 0.3216 | 0.3669 | 0.1276 | 0.1205 | 0.2264 | 0.1576 | 0.3078 | 0.3722 | 0.125 | 0.2211 | 0.2273 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1 |
r4fall/Meta-Llama-3-8B-Instruct-pl | r4fall | 2024-05-14T16:27:55Z | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Instruct",
"region:us"
] | null | 2024-05-14T16:27:04Z | ---
library_name: peft
base_model: NousResearch/Meta-Llama-3-8B-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
DUAL-GPO/phi-2-gpo-v21-i1 | DUAL-GPO | 2024-05-14T16:23:44Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-gpo-new-i0",
"base_model:adapter:DUAL-GPO/phi-2-gpo-new-i0",
"license:mit",
"region:us"
] | null | 2024-05-14T15:31:00Z | ---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: DUAL-GPO/phi-2-gpo-new-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-gpo-v21-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-v21-i1
This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-new-i0](https://huggingface.co/DUAL-GPO/phi-2-gpo-new-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
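Since this repo holds a PEFT adapter rather than full weights, a minimal loading sketch (an assumption, not an official snippet; `trust_remote_code` is needed for phi-2's custom modeling code):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model recorded in adapter_config.json and applies the adapter on top
model = AutoPeftModelForCausalLM.from_pretrained("DUAL-GPO/phi-2-gpo-v21-i1", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("DUAL-GPO/phi-2-gpo-v21-i1", trust_remote_code=True)
```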
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
stephenimm/my_awesome_eli5_mlm_model | stephenimm | 2024-05-14T16:17:44Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-14T14:46:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: distilbert/distilroberta-base
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0168
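A minimal usage sketch (the example sentence is illustrative; `<mask>` is the RoBERTa mask token):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="stephenimm/my_awesome_eli5_mlm_model")
print(fill("The Milky Way is a <mask> galaxy."))
```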
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2612 | 1.0 | 1311 | 2.0833 |
| 2.1651 | 2.0 | 2622 | 2.0288 |
| 2.1274 | 3.0 | 3933 | 2.0364 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cpu
- Datasets 2.19.1
- Tokenizers 0.19.1
|
maisonmargela/gpt2_code_writer | maisonmargela | 2024-05-14T16:14:36Z | 147 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T16:13:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wenshicheng97/no_board_history_with_sys_history_cicero_lr5e-5_batch10 | wenshicheng97 | 2024-05-14T16:14:12Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2024-05-14T05:48:50Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: no_board_history_with_sys_history_cicero_lr5e-5_batch10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no_board_history_with_sys_history_cicero_lr5e-5_batch10
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
iony-mikler/q-Taxi-v3 | iony-mikler | 2024-05-14T16:13:03Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T15:38:45Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the pickle-download helper from the Hugging Face Deep RL course
model = load_from_hub(repo_id="iony-mikler/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DucPhanBa/llama2-finetuned-qlora | DucPhanBa | 2024-05-14T16:12:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T16:11:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
oasic/merged-4bit-tiny-llama-gc2 | oasic | 2024-05-14T16:11:39Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-14T16:10:26Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** oasic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
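A minimal loading sketch for this merged 4-bit checkpoint (an assumption, not an official snippet; `bitsandbytes` must be installed for the 4-bit weights):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("oasic/merged-4bit-tiny-llama-gc2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("oasic/merged-4bit-tiny-llama-gc2")
```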
|
veritober/clasificador-muchocine | veritober | 2024-05-14T16:09:46Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T16:09:28Z | ---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3716
- Accuracy: 0.4465
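A minimal inference sketch (the review text is illustrative; the label names come from the training config and are not documented here):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="veritober/clasificador-muchocine")
print(clf("Una película entretenida pero con un guion flojo."))
```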
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3597 | 0.3794 |
| 1.4242 | 2.0 | 776 | 1.3048 | 0.4374 |
| 1.0638 | 3.0 | 1164 | 1.3716 | 0.4465 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Meziane/three_question | Meziane | 2024-05-14T16:07:59Z | 131 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"question-answering",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-14T16:02:48Z | ---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: three_question
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# three_question
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 100 | nan |
| No log | 2.0 | 200 | nan |
| No log | 3.0 | 300 | nan |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.13.3
|
PabloMiguelGarcia/clasificador-muchocine | PabloMiguelGarcia | 2024-05-14T16:04:59Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T16:04:08Z | ---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4110
- Accuracy: 0.4490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3473 | 0.3884 |
| 1.3955 | 2.0 | 776 | 1.3101 | 0.4465 |
| 1.0263 | 3.0 | 1164 | 1.4110 | 0.4490 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
BuroIdentidadDigital/Ine_FrontalV0 | BuroIdentidadDigital | 2024-05-14T16:01:01Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-14T15:51:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Samael667/my-autotrain-llm | Samael667 | 2024-05-14T15:56:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:56:08Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
DUAL-GPO/zephyr-7b-gpo-v5-i3 | DUAL-GPO | 2024-05-14T15:53:15Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO-2/zephyr-7b-irepo-new-i2",
"base_model:adapter:DUAL-GPO-2/zephyr-7b-irepo-new-i2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T09:24:48Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO-2/zephyr-7b-irepo-new-i2
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-gpo-v5-i3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-gpo-v5-i3
This model is a fine-tuned version of [DUAL-GPO-2/zephyr-7b-irepo-new-i2](https://huggingface.co/DUAL-GPO-2/zephyr-7b-irepo-new-i2) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
jdumasleon/clasificador-muchocine | jdumasleon | 2024-05-14T15:51:16Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T15:50:56Z | ---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4260
- Accuracy: 0.4348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3400 | 0.3923 |
| 1.3934 | 2.0 | 776 | 1.2767 | 0.4503 |
| 0.9927 | 3.0 | 1164 | 1.4260 | 0.4348 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
EyaZr/my_code_dataset | EyaZr | 2024-05-14T15:51:02Z | 145 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:46:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
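No snippet is provided yet; a minimal sketch, assuming this is a standard Gemma-style chat checkpoint (the repo is tagged `gemma` and `conversational`) whose tokenizer ships a chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EyaZr/my_code_dataset"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```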
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ashishkgpian/best_mistral_model | ashishkgpian | 2024-05-14T15:48:49Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"astronomy",
"conversational",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-14T15:43:42Z | ---
library_name: transformers
tags:
- astronomy
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
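No snippet is provided yet; a minimal sketch, assuming the checkpoint loads like any Mistral causal LM (the repo is tagged `4-bit`/`bitsandbytes`, so the stored quantization config should apply automatically; the astronomy prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ashishkgpian/best_mistral_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# bitsandbytes must be installed for the saved 4-bit quantization to load
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain what a pulsar is.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```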
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Aryan0310/t5-small-finetuned-cnn-daily | Aryan0310 | 2024-05-14T15:47:52Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T09:34:58Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-daily
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-daily
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6925
- Rouge1: 24.4516
- Rouge2: 11.7206
- Rougel: 20.1946
- Rougelsum: 23.0597
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
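As a sketch of the intended use, the checkpoint should work with the standard summarization pipeline (the input article is a placeholder, and `max_length` is an arbitrary choice):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Aryan0310/t5-small-finetuned-cnn-daily")
article = "(paste a news article here)"  # placeholder input
print(summarizer(article, max_length=60, truncation=True)[0]["summary_text"])
```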
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8533 | 1.0 | 17945 | 1.6925 | 24.4516 | 11.7206 | 20.1946 | 23.0597 | 18.9996 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
MiVaCod/rotten | MiVaCod | 2024-05-14T15:46:41Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T17:44:35Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rotten
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rotten
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8598
- Accuracy: 0.8527
## Model description
More information needed
## Intended uses & limitations
More information needed
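As a sketch of the intended use (the review text is ours; without an `id2label` mapping documented in the card, outputs may appear as generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="MiVaCod/rotten")
print(clf("A sharp, funny script carried by two terrific performances."))
```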
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.405 | 1.0 | 1067 | 0.3657 | 0.8546 |
| 0.225 | 2.0 | 2134 | 0.7075 | 0.8433 |
| 0.0711 | 3.0 | 3201 | 0.8598 | 0.8527 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
shull/whisper-small-finetuned-v5en | shull | 2024-05-14T15:45:12Z | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-14T06:45:44Z | ---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper small v5-en finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small v5-en finetuned
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the my_audio_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1560
- Wer: 5.6915
## Model description
More information needed
## Intended uses & limitations
More information needed
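As a sketch of the intended use via the ASR pipeline (the audio path is a placeholder; decoding local files requires ffmpeg):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="shull/whisper-small-finetuned-v5en")
print(asr("sample.wav")["text"])  # path to your own English audio clip
```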
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.1138 | 5.8309 | 1000 | 0.1326 | 6.3035 |
| 0.004 | 11.6618 | 2000 | 0.1507 | 5.7015 |
| 0.0014 | 17.4927 | 3000 | 0.1560 | 5.6915 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/llama-3-spicy-8B-GGUF | mradermacher | 2024-05-14T15:41:18Z | 60 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/llama-3-spicy-8B",
"base_model:quantized:nbeerbower/llama-3-spicy-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T14:39:17Z | ---
base_model: nbeerbower/llama-3-spicy-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nbeerbower/llama-3-spicy-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
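For a quick start in Python, a minimal sketch using `llama-cpp-python` with one of the files from the table below (the quant choice, context size, and prompt are illustrative):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="llama-3-spicy-8B.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if a GPU build is available
)
print(llm("Q: What is a quark?\nA:", max_tokens=64)["choices"][0]["text"])
```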
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-spicy-8B-GGUF/resolve/main/llama-3-spicy-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Trelis/OpenELM-450M-instruct-ORPO | Trelis | 2024-05-14T15:40:06Z | 160 | 0 | transformers | [
"transformers",
"safetensors",
"openelm",
"text-generation",
"apple",
"OpenELM",
"conversational",
"custom_code",
"dataset:argilla/dpo-mix-7k",
"arxiv:2404.14619",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-14T15:25:06Z | ---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
datasets:
- argilla/dpo-mix-7k
tags:
- apple
- OpenELM
---
# OpenELM
These are ORPO fine-tunes, done using the Argilla/dpo-mix-7k dataset:
- [270M fine-tune](https://huggingface.co/Trelis/OpenELM-270M-instruct-ORPO)
- [450M fine-tune](https://huggingface.co/Trelis/OpenELM-450M-instruct-ORPO)
## Performance notes
OpenELM models are quite weak.
- OpenELM 270M is uniquely small, but weak.
- OpenELM 450M improves a little over the 270M model, but remains weak on accuracy and hallucinates strongly.
- Qwen 1.5 0.5B is stronger than the comparable OpenELM models.
- TinyLlama is stronger than OpenELM 1B.
- Models like Phi-3 are stronger than OpenELM 3B.
## Usage Notes
- Flash attention is not supported
- Making GGUFs is not [yet supported](https://github.com/ggerganov/llama.cpp/issues/6868)
## Inference
See [this Colab Notebook](https://colab.research.google.com/drive/1vFMRhHdPyUxbZAlRWwyl79NwnrSz_yQL?usp=sharing)
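For local use without the notebook, a minimal sketch pieced together from the evaluation setup below (OpenELM needs `trust_remote_code=True` and borrows the gated Llama-2 tokenizer; the prompt and `repetition_penalty` mirror the upstream example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Trelis/OpenELM-450M-instruct-ORPO", trust_remote_code=True
)
# OpenELM ships no tokenizer of its own; upstream uses the Llama-2 tokenizer (gated repo)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```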
~~~
The original model card follows below.
~~~
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
Additional arguments to the Hugging Face generate function can be passed via `generate_kwargs`. As an example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparison.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
```
### Evaluate OpenELM
```bash
# OpenELM-450M-Instruct
hf_model=apple/OpenELM-450M-Instruct
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
|
reemmasoud/idv_vs_col_llama-3_PromptTuning_CAUSAL_LM_gradient_descent_v1 | reemmasoud | 2024-05-14T15:39:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T15:39:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
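No snippet is provided yet; a minimal sketch, assuming (as the repo name suggests) this is a prompt-tuning adapter for a Llama-3 causal LM; PEFT can resolve the base model from the adapter config, and the gated base may require an access token:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "reemmasoud/idv_vs_col_llama-3_PromptTuning_CAUSAL_LM_gradient_descent_v1"
model = AutoPeftModelForCausalLM.from_pretrained(repo)  # loads base model + soft prompt
tokenizer = AutoTokenizer.from_pretrained(model.peft_config["default"].base_model_name_or_path)

inputs = tokenizer("Gradient descent updates the parameters by", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```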
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
terry69/mistral_poe_10-full | terry69 | 2024-05-14T15:31:35Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:29:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
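No snippet is provided yet; a minimal sketch, assuming this is a standard Mistral causal-LM checkpoint (the prompt is illustrative):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="terry69/mistral_poe_10-full", device_map="auto")
print(pipe("The capital of France is", max_new_tokens=16)[0]["generated_text"])
```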
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF | mradermacher | 2024-05-14T15:30:55Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/LLaMa-3-Base-Zeroed-13B",
"base_model:quantized:mergekit-community/LLaMa-3-Base-Zeroed-13B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T14:39:16Z | ---
base_model: mergekit-community/LLaMa-3-Base-Zeroed-13B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mergekit-community/LLaMa-3-Base-Zeroed-13B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
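For a quick start in Python, a minimal sketch that pulls one quant from this repo and runs it with `llama-cpp-python` (the file choice and settings are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    "mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF",
    "LLaMa-3-Base-Zeroed-13B.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Once upon a time", max_tokens=32)["choices"][0]["text"])
```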
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q2_K.gguf) | Q2_K | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.IQ3_XS.gguf) | IQ3_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q3_K_S.gguf) | Q3_K_S | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q3_K_L.gguf) | Q3_K_L | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.IQ4_XS.gguf) | IQ4_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q5_K_M.gguf) | Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMa-3-Base-Zeroed-13B-GGUF/resolve/main/LLaMa-3-Base-Zeroed-13B.Q8_0.gguf) | Q8_0 | 14.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kyl23/hw3_SST2_lora_1e-4_r16 | kyl23 | 2024-05-14T15:29:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T15:29:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
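No snippet is provided yet; a minimal sketch, assuming (as the repo name suggests) this is a LoRA adapter for SST-2 sentiment classification; the base model resolution and the label order (0 = negative, 1 = positive) are assumptions:

```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

repo = "kyl23/hw3_SST2_lora_1e-4_r16"
model = AutoPeftModelForSequenceClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(model.peft_config["default"].base_model_name_or_path)

inputs = tokenizer("a gorgeous, witty, seductive movie", return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print("positive" if pred == 1 else "negative")
```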
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
terry69/mistral_poe_20-full | terry69 | 2024-05-14T15:27:54Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:25:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
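No snippet is provided yet; a minimal sketch with streamed output, again assuming a standard Mistral causal-LM checkpoint (prompt and dtype are our choices):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "terry69/mistral_poe_20-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Write a haiku about rain.", return_tensors="pt").to(model.device)
model.generate(**inputs, max_new_tokens=64, streamer=TextStreamer(tokenizer))
```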
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
terry69/mistral_poe_add-full | terry69 | 2024-05-14T15:23:42Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:10:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mjavadf/whisper-small-dv | mjavadf | 2024-05-14T15:21:12Z | 94 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-14T13:19:41Z | ---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.60712174427096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1733
- Wer Ortho: 62.6715
- Wer: 13.6071
## Model description
More information needed
## Intended uses & limitations
More information needed
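Pending a fuller write-up, the checkpoint can be tried with the 🤗 `pipeline` API. A minimal sketch, where `sample.wav` is a hypothetical local audio file:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into an ASR pipeline
asr = pipeline("automatic-speech-recognition", model="mjavadf/whisper-small-dv")

# "sample.wav" is a placeholder path; replace it with a real Dhivehi recording
print(asr("sample.wav")["text"])
```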
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.1198 | 1.6287 | 500 | 0.1733 | 62.6715 | 13.6071 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Antonio49/ModeloCanal | Antonio49 | 2024-05-14T15:20:20Z | 113 | 2 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"es",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-04-07T08:20:05Z | ---
title: Antonio.BERT.Canal
emoji: 🐠
colorFrom: red
colorTo: blue
sdk: gradio
sdk_version: 3.33.1
app_file: app.py
pinned: false
license: mit
language:
- es
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
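In the meantime, here is a minimal sketch using the 🤗 `pipeline` API; the question and context strings below are placeholders, not real documentation:
```python
from transformers import pipeline

# Load the model into an extractive question-answering pipeline
qa = pipeline("question-answering", model="Antonio49/ModeloCanal")

# Placeholder Spanish question/context; replace with real text
result = qa(
    question="¿Qué servicio presta la empresa?",
    context="La empresa gestiona el ciclo integral del agua en la región.",
)
print(result["answer"], result["score"])
```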
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed] |
terry69/mistral_poe_nores-full | terry69 | 2024-05-14T15:18:24Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:15:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PantagrueLLM/jargon-general-biomed | PantagrueLLM | 2024-05-14T15:17:56Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"jargon",
"fill-mask",
"linformer",
"medical",
"RoBERTa",
"custom_code",
"fr",
"license:mit",
"autotrain_compatible",
"region:us"
] | fill-mask | 2024-05-13T17:49:04Z | ---
license: mit
language:
- fr
library_name: transformers
tags:
- linformer
- medical
- RoBERTa
- pytorch
---
# Jargon-general-biomed
[Jargon](https://hal.science/hal-04535557/file/FB2_domaines_specialises_LREC_COLING24.pdf) is an efficient transformer encoder LM for French, combining the LinFormer attention mechanism with the RoBERTa model architecture.
Jargon is available in several versions with different context sizes and types of pre-training corpora.
<!-- Provide a quick summary of what the model is/does. -->
<!-- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
-->
| **Model** | **Initialised from...** |
|-------------------------------------------------------------------------------------|:-----------------------:|
| [jargon-general-base](https://huggingface.co/PantagrueLLM/jargon-general-base) | scratch |
| [jargon-general-biomed](https://huggingface.co/PantagrueLLM/jargon-general-biomed) | jargon-general-base |
| jargon-general-legal | jargon-general-base |
| [jargon-multidomain-base](https://huggingface.co/PantagrueLLM/jargon-multidomain-base) | jargon-general-base |
| jargon-legal | scratch |
| [jargon-legal-4096](https://huggingface.co/PantagrueLLM/jargon-legal-4096) | scratch |
| [jargon-biomed](https://huggingface.co/PantagrueLLM/jargon-biomed) | scratch |
| [jargon-biomed-4096](https://huggingface.co/PantagrueLLM/jargon-biomed-4096) | scratch |
| [jargon-NACHOS](https://huggingface.co/PantagrueLLM/jargon-NACHOS) | scratch |
| [jargon-NACHOS-4096](https://huggingface.co/PantagrueLLM/jargon-NACHOS-4096) | scratch |
## Evaluation
The Jargon models were evaluated on a range of specialized downstream tasks.
## Biomedical Benchmark
Results are averaged across five runs with varying random seeds.
| |[**FrenchMedMCQA**](https://huggingface.co/datasets/qanastek/frenchmedmcqa)|[**MQC**](https://aclanthology.org/2020.lrec-1.72/)|[**CAS-POS**](https://clementdalloux.fr/?page_id=28)|[**ESSAI-POS**](https://clementdalloux.fr/?page_id=28)|[**CAS-SG**](https://aclanthology.org/W18-5614/)|[**MEDLINE**](https://huggingface.co/datasets/mnaguib/QuaeroFrenchMed)|[**EMEA**](https://huggingface.co/datasets/mnaguib/QuaeroFrenchMed)|[**E3C-NER**](https://live.european-language-grid.eu/catalogue/corpus/7618)|[**CLISTER**](https://aclanthology.org/2022.lrec-1.459/)|
|-------------------------|:-----------------------:|:-----------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|
| **Task Type** | Sequence Classification | Sequence Classification | Token Classification | Token Classification | Token Classification | Token Classification | Token Classification | Token Classification | STS |
| **Metric** | EMR | Accuracy | Macro-F1 | Macro-F1 | Weighted F1 | Weighted F1 | Weighted F1 | Weighted F1 | Spearman Correlation |
| jargon-general-base | 12.9 | 76.7 | 96.6 | 96.0 | 69.4 | 81.7 | 96.5 | 91.9 | 78.0 |
| jargon-biomed | 15.3 | 91.1 | 96.5 | 95.6 | 75.1 | 83.7 | 96.5 | 93.5 | 74.6 |
| jargon-biomed-4096 | 14.4 | 78.9 | 96.6 | 95.9 | 73.3 | 82.3 | 96.3 | 92.5 | 65.3 |
| jargon-general-biomed | 16.1 | 69.7 | 95.1 | 95.1 | 67.8 | 78.2 | 96.6 | 91.3 | 59.7 |
| jargon-multidomain-base | 14.9 | 86.9 | 96.3 | 96.0 | 70.6 | 82.4 | 96.6 | 92.6 | 74.8 |
| jargon-NACHOS | 13.3 | 90.7 | 96.3 | 96.2 | 75.0 | 83.4 | 96.8 | 93.1 | 70.9 |
| jargon-NACHOS-4096 | 18.4 | 93.2 | 96.2 | 95.9 | 74.9 | 83.8 | 96.8 | 93.2 | 74.9 |
For more info please check out the [paper](https://hal.science/hal-04535557/file/FB2_domaines_specialises_LREC_COLING24.pdf), accepted for publication at [LREC-COLING 2024](https://lrec-coling-2024.org/list-of-accepted-papers/).
## Using Jargon models with HuggingFace transformers
You can get started with `jargon-general-biomed` using the code snippet below:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("PantagrueLLM/jargon-general-biomed", trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained("PantagrueLLM/jargon-general-biomed", trust_remote_code=True)
jargon_maskfiller = pipeline("fill-mask", model=model, tokenizer=tokenizer)
output = jargon_maskfiller("Il est allé au <mask> hier")
```
You can also use the classes `AutoModel`, `AutoModelForSequenceClassification`, or `AutoModelForTokenClassification` to load Jargon models, depending on the downstream task in question.
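For instance, a token-classification head can be attached as follows. A minimal sketch; `num_labels=9` is an arbitrary placeholder to be set to match your tag set:
```python
from transformers import AutoModelForTokenClassification

# num_labels=9 is a placeholder value, not part of the released checkpoint
model = AutoModelForTokenClassification.from_pretrained(
    "PantagrueLLM/jargon-general-biomed",
    num_labels=9,
    trust_remote_code=True,
)
```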
- **Language(s):** French
- **License:** MIT
- **Developed by:** Vincent Segonne
- **Funded by**
- GENCI-IDRIS (Grant 2022 A0131013801)
- French National Research Agency: Pantagruel grant ANR-23-IAS1-0001
- MIAI@Grenoble Alpes ANR-19-P3IA-0003
- PROPICTO ANR-20-CE93-0005
- Lawbot ANR-20-CE38-0013
- Swiss National Science Foundation (grant PROPICTO N°197864)
- **Authors**
- Vincent Segonne
- Aidan Mannion
- Laura Cristina Alonzo Canul
- Alexandre Audibert
- Xingyu Liu
- Cécile Macaire
- Adrien Pupier
- Yongxin Zhou
- Mathilde Aguiar
- Felix Herron
- Magali Norré
- Massih-Reza Amini
- Pierrette Bouillon
- Iris Eshkol-Taravella
- Emmanuelle Esperança-Rodier
- Thomas François
- Lorraine Goeuriot
- Jérôme Goulian
- Mathieu Lafourcade
- Benjamin Lecouteux
- François Portet
- Fabien Ringeval
- Vincent Vandeghinste
- Maximin Coavoux
- Marco Dinarelli
- Didier Schwab
## Citation
If you use this model for your own research work, please cite as follows:
```bibtex
@inproceedings{segonne:hal-04535557,
TITLE = {{Jargon: A Suite of Language Models and Evaluation Tasks for French Specialized Domains}},
AUTHOR = {Segonne, Vincent and Mannion, Aidan and Alonzo Canul, Laura Cristina and Audibert, Alexandre and Liu, Xingyu and Macaire, C{\'e}cile and Pupier, Adrien and Zhou, Yongxin and Aguiar, Mathilde and Herron, Felix and Norr{\'e}, Magali and Amini, Massih-Reza and Bouillon, Pierrette and Eshkol-Taravella, Iris and Esperan{\c c}a-Rodier, Emmanuelle and Fran{\c c}ois, Thomas and Goeuriot, Lorraine and Goulian, J{\'e}r{\^o}me and Lafourcade, Mathieu and Lecouteux, Benjamin and Portet, Fran{\c c}ois and Ringeval, Fabien and Vandeghinste, Vincent and Coavoux, Maximin and Dinarelli, Marco and Schwab, Didier},
URL = {https://hal.science/hal-04535557},
BOOKTITLE = {{LREC-COLING 2024 - Joint International Conference on Computational Linguistics, Language Resources and Evaluation}},
ADDRESS = {Turin, Italy},
YEAR = {2024},
MONTH = May,
KEYWORDS = {Self-supervised learning ; Pretrained language models ; Evaluation benchmark ; Biomedical document processing ; Legal document processing ; Speech transcription},
PDF = {https://hal.science/hal-04535557/file/FB2_domaines_specialises_LREC_COLING24.pdf},
HAL_ID = {hal-04535557},
HAL_VERSION = {v1},
}
```
<!-- - **Finetuned from model [optional]:** [More Information Needed] -->
<!--
### Model Sources [optional]
<!-- Provide the basic links for the model. --> |
kishiyev/ppo-LunarLander-v2 | kishiyev | 2024-05-14T15:16:31Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T12:59:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.27 +/- 19.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
repo_id = "kishiyev/ppo-LunarLander-v2" # The repo_id
filename = "ppo-LunarLander-v2.zip" # The model filename.zip
custom_objects = {
"learning_rate": 0.0,
"lr_schedule": lambda _: 0.0,
"clip_range": lambda _: 0.0,
}
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
...
```
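Once loaded, the agent can be evaluated in the usual way. A minimal sketch, assuming `gymnasium` is installed:
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Roll out 10 deterministic episodes and report the mean episodic reward
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```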
|
emilykang/Phi_finetune_med | emilykang | 2024-05-14T15:16:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-05-14T09:30:46Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
datasets:
- generator
model-index:
- name: Phi_finetune_med
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi_finetune_med
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
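Although usage details are pending, the adapter can in principle be loaded on top of the base model with 🤗 PEFT. A minimal sketch, not an official recipe:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the phi-2 base weights, then attach this LoRA adapter
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "emilykang/Phi_finetune_med")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
```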
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
AlkQ/ppo-SnowballTarget | AlkQ | 2024-05-14T15:13:57Z | 13 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-05-14T15:13:54Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AlkQ/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
PantagrueLLM/jargon-NACHOS-4096 | PantagrueLLM | 2024-05-14T15:13:54Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"jargon",
"fill-mask",
"linformer",
"medical",
"RoBERTa",
"custom_code",
"fr",
"license:mit",
"autotrain_compatible",
"region:us"
] | fill-mask | 2024-05-13T18:49:23Z | ---
license: mit
language:
- fr
library_name: transformers
tags:
- linformer
- medical
- RoBERTa
- pytorch
---
# Jargon-NACHOS-4096
[Jargon](https://hal.science/hal-04535557/file/FB2_domaines_specialises_LREC_COLING24.pdf) is an efficient transformer encoder LM for French, combining the LinFormer attention mechanism with the RoBERTa model architecture.
Jargon is available in several versions with different context sizes and types of pre-training corpora.
<!-- Provide a quick summary of what the model is/does. -->
<!-- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
-->
| **Model** | **Initialised from...** |**Training Data**|
|-------------------------------------------------------------------------------------|:-----------------------:|:----------------:|
| [jargon-general-base](https://huggingface.co/PantagrueLLM/jargon-general-base) | scratch |8.5GB Web Corpus|
| [jargon-general-biomed](https://huggingface.co/PantagrueLLM/jargon-general-biomed) | jargon-general-base |5.4GB Medical Corpus|
| jargon-general-legal | jargon-general-base |18GB Legal Corpus|
| [jargon-multidomain-base](https://huggingface.co/PantagrueLLM/jargon-multidomain-base) | jargon-general-base |Medical+Legal Corpora|
| jargon-legal | scratch |18GB Legal Corpus|
| [jargon-legal-4096](https://huggingface.co/PantagrueLLM/jargon-legal-4096) | scratch |18GB Legal Corpus|
| [jargon-biomed](https://huggingface.co/PantagrueLLM/jargon-biomed) | scratch |5.4GB Medical Corpus|
| [jargon-biomed-4096](https://huggingface.co/PantagrueLLM/jargon-biomed-4096) | scratch |5.4GB Medical Corpus|
| [jargon-NACHOS](https://huggingface.co/PantagrueLLM/jargon-NACHOS) | scratch |[NACHOS](https://drbert.univ-avignon.fr/)|
| [jargon-NACHOS-4096](https://huggingface.co/PantagrueLLM/jargon-NACHOS-4096) | scratch |[NACHOS](https://drbert.univ-avignon.fr/)|
## Evaluation
The Jargon models were evaluated on a range of specialized downstream tasks.
## Biomedical Benchmark
Results are averaged across five runs with varying random seeds.
| |[**FrenchMedMCQA**](https://huggingface.co/datasets/qanastek/frenchmedmcqa)|[**MQC**](https://aclanthology.org/2020.lrec-1.72/)|[**CAS-POS**](https://clementdalloux.fr/?page_id=28)|[**ESSAI-POS**](https://clementdalloux.fr/?page_id=28)|[**CAS-SG**](https://aclanthology.org/W18-5614/)|[**MEDLINE**](https://huggingface.co/datasets/mnaguib/QuaeroFrenchMed)|[**EMEA**](https://huggingface.co/datasets/mnaguib/QuaeroFrenchMed)|[**E3C-NER**](https://live.european-language-grid.eu/catalogue/corpus/7618)|[**CLISTER**](https://aclanthology.org/2022.lrec-1.459/)|
|-------------------------|:-----------------------:|:-----------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|
| **Task Type** | Sequence Classification | Sequence Classification | Token Classification | Token Classification | Token Classification | Token Classification | Token Classification | Token Classification | STS |
| **Metric** | EMR | Accuracy | Macro-F1 | Macro-F1 | Weighted F1 | Weighted F1 | Weighted F1 | Weighted F1 | Spearman Correlation |
| jargon-general-base | 12.9 | 76.7 | 96.6 | 96.0 | 69.4 | 81.7 | 96.5 | 91.9 | 78.0 |
| jargon-biomed | 15.3 | 91.1 | 96.5 | 95.6 | 75.1 | 83.7 | 96.5 | 93.5 | 74.6 |
| jargon-biomed-4096 | 14.4 | 78.9 | 96.6 | 95.9 | 73.3 | 82.3 | 96.3 | 92.5 | 65.3 |
| jargon-general-biomed | 16.1 | 69.7 | 95.1 | 95.1 | 67.8 | 78.2 | 96.6 | 91.3 | 59.7 |
| jargon-multidomain-base | 14.9 | 86.9 | 96.3 | 96.0 | 70.6 | 82.4 | 96.6 | 92.6 | 74.8 |
| jargon-NACHOS | 13.3 | 90.7 | 96.3 | 96.2 | 75.0 | 83.4 | 96.8 | 93.1 | 70.9 |
| jargon-NACHOS-4096 | 18.4 | 93.2 | 96.2 | 95.9 | 74.9 | 83.8 | 96.8 | 93.2 | 74.9 |
For more info please check out the [paper](https://hal.science/hal-04535557/file/FB2_domaines_specialises_LREC_COLING24.pdf), accepted for publication at [LREC-COLING 2024](https://lrec-coling-2024.org/list-of-accepted-papers/).
## Using Jargon models with HuggingFace transformers
You can get started with `jargon-NACHOS-4096` using the code snippet below:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("PantagrueLLM/jargon-NACHOS-4096", trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained("PantagrueLLM/jargon-NACHOS-4096", trust_remote_code=True)
jargon_maskfiller = pipeline("fill-mask", model=model, tokenizer=tokenizer)
output = jargon_maskfiller("Il est allé au <mask> hier")
```
You can also use the classes `AutoModel`, `AutoModelForSequenceClassification`, or `AutoModelForTokenClassification` to load Jargon models, depending on the downstream task in question.
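As an illustration, the encoder can also be loaded without a task head to extract contextual representations. A minimal sketch; the output attribute is assumed from the RoBERTa-style architecture:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PantagrueLLM/jargon-NACHOS-4096", trust_remote_code=True)
encoder = AutoModel.from_pretrained("PantagrueLLM/jargon-NACHOS-4096", trust_remote_code=True)

inputs = tokenizer("Le patient présente une fièvre persistante.", return_tensors="pt")
with torch.no_grad():
    # (batch, seq_len, hidden): one contextual vector per token
    hidden_states = encoder(**inputs).last_hidden_state
```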
- **Language(s):** French
- **License:** MIT
- **Developed by:** Vincent Segonne
- **Funded by**
- GENCI-IDRIS (Grant 2022 A0131013801)
- French National Research Agency: Pantagruel grant ANR-23-IAS1-0001
- MIAI@Grenoble Alpes ANR-19-P3IA-0003
- PROPICTO ANR-20-CE93-0005
- Lawbot ANR-20-CE38-0013
- Swiss National Science Foundation (grant PROPICTO N°197864)
- **Authors**
- Vincent Segonne
- Aidan Mannion
- Laura Cristina Alonzo Canul
- Alexandre Audibert
- Xingyu Liu
- Cécile Macaire
- Adrien Pupier
- Yongxin Zhou
- Mathilde Aguiar
- Felix Herron
- Magali Norré
- Massih-Reza Amini
- Pierrette Bouillon
- Iris Eshkol-Taravella
- Emmanuelle Esperança-Rodier
- Thomas François
- Lorraine Goeuriot
- Jérôme Goulian
- Mathieu Lafourcade
- Benjamin Lecouteux
- François Portet
- Fabien Ringeval
- Vincent Vandeghinste
- Maximin Coavoux
- Marco Dinarelli
- Didier Schwab
## Citation
If you use this model for your own research work, please cite as follows:
```bibtex
@inproceedings{segonne:hal-04535557,
TITLE = {{Jargon: A Suite of Language Models and Evaluation Tasks for French Specialized Domains}},
AUTHOR = {Segonne, Vincent and Mannion, Aidan and Alonzo Canul, Laura Cristina and Audibert, Alexandre and Liu, Xingyu and Macaire, C{\'e}cile and Pupier, Adrien and Zhou, Yongxin and Aguiar, Mathilde and Herron, Felix and Norr{\'e}, Magali and Amini, Massih-Reza and Bouillon, Pierrette and Eshkol-Taravella, Iris and Esperan{\c c}a-Rodier, Emmanuelle and Fran{\c c}ois, Thomas and Goeuriot, Lorraine and Goulian, J{\'e}r{\^o}me and Lafourcade, Mathieu and Lecouteux, Benjamin and Portet, Fran{\c c}ois and Ringeval, Fabien and Vandeghinste, Vincent and Coavoux, Maximin and Dinarelli, Marco and Schwab, Didier},
URL = {https://hal.science/hal-04535557},
BOOKTITLE = {{LREC-COLING 2024 - Joint International Conference on Computational Linguistics, Language Resources and Evaluation}},
ADDRESS = {Turin, Italy},
YEAR = {2024},
MONTH = May,
KEYWORDS = {Self-supervised learning ; Pretrained language models ; Evaluation benchmark ; Biomedical document processing ; Legal document processing ; Speech transcription},
PDF = {https://hal.science/hal-04535557/file/FB2_domaines_specialises_LREC_COLING24.pdf},
HAL_ID = {hal-04535557},
HAL_VERSION = {v1},
}
```
<!-- - **Finetuned from model [optional]:** [More Information Needed] -->
<!--
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
|
Sarxsarkos/ppo-Huggy | Sarxsarkos | 2024-05-14T15:13:28Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-05-14T14:54:03Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Sarxsarkos/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
terry69/mistral_poe_small-full | terry69 | 2024-05-14T15:11:11Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:09:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kyl23/hw3_SST2_lora_1e-3 | kyl23 | 2024-05-14T15:11:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T15:10:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF | mradermacher | 2024-05-14T15:09:07Z | 119 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:asiansoul/Joah-Remix-Llama-3-KoEn-8B-Reborn",
"base_model:quantized:asiansoul/Joah-Remix-Llama-3-KoEn-8B-Reborn",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T14:39:15Z | ---
base_model: asiansoul/Joah-Remix-Llama-3-KoEn-8B-Reborn
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/asiansoul/Joah-Remix-Llama-3-KoEn-8B-Reborn
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
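As a quick reference, a quant from this repo can be run locally with llama.cpp. A minimal sketch, assuming a built copy of llama.cpp and the Q4_K_M file already downloaded into the working directory:
```bash
# Generate 128 tokens from a short prompt with a local llama.cpp build
./main -m Joah-Remix-Llama-3-KoEn-8B-Reborn.Q4_K_M.gguf -p "Hello" -n 128
```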
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Joah-Remix-Llama-3-KoEn-8B-Reborn-GGUF/resolve/main/Joah-Remix-Llama-3-KoEn-8B-Reborn.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
GenTrendGPT/OS-Test-Mark-GEN-IA | GenTrendGPT | 2024-05-14T15:07:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Nexusflow/Starling-LM-7B-beta",
"base_model:merge:Nexusflow/Starling-LM-7B-beta",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:02:04Z | ---
base_model:
- Nexusflow/Starling-LM-7B-beta
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: passthrough
dtype: bfloat16
slices:
- sources:
- model: meta-llama/Meta-Llama-3-8B-Instruct
layer_range: [0, 32]
- sources:
- model: Nexusflow/Starling-LM-7B-beta
layer_range: [0, 32]
```
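For reference, a configuration like this is typically executed with the mergekit CLI. A hedged sketch, where `config.yaml` and the output directory are placeholder paths:
```bash
# Run the merge described by the YAML above
mergekit-yaml config.yaml ./merged-model
```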
|
NikolayKozloff/phi-3-portuguese-tom-cat-4k-instruct-Q8_0-GGUF | NikolayKozloff | 2024-05-14T15:06:48Z | 5 | 1 | transformers | [
"transformers",
"gguf",
"portugues",
"portuguese",
"QA",
"instruct",
"phi",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"pt",
"dataset:rhaymison/superset",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-4k-instruct",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-14T15:06:37Z | ---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
- phi
- llama-cpp
- gguf-my-repo
base_model: microsoft/Phi-3-mini-4k-instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation
model-index:
- name: phi-3-portuguese-tom-cat-4k-instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 61.58
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 50.63
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 43.69
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 91.54
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 75.27
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 47.46
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 83.01
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 70.19
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 57.78
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
name: Open Portuguese LLM Leaderboard
---
# NikolayKozloff/phi-3-portuguese-tom-cat-4k-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`rhaymison/phi-3-portuguese-tom-cat-4k-instruct`](https://huggingface.co/rhaymison/phi-3-portuguese-tom-cat-4k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rhaymison/phi-3-portuguese-tom-cat-4k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/phi-3-portuguese-tom-cat-4k-instruct-Q8_0-GGUF --model phi-3-portuguese-tom-cat-4k-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/phi-3-portuguese-tom-cat-4k-instruct-Q8_0-GGUF --model phi-3-portuguese-tom-cat-4k-instruct.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-portuguese-tom-cat-4k-instruct.Q8_0.gguf -n 128
```
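You can also load the quant from Python. The snippet below is a minimal sketch using the llama-cpp-python bindings (`pip install llama-cpp-python`); the prompt is only an illustrative example.
```python
from llama_cpp import Llama

# Downloads the Q8_0 quant from this repo on first use
llm = Llama.from_pretrained(
    repo_id="NikolayKozloff/phi-3-portuguese-tom-cat-4k-instruct-Q8_0-GGUF",
    filename="phi-3-portuguese-tom-cat-4k-instruct.Q8_0.gguf",
    n_ctx=2048,  # context window, matching the server example above
)

# Illustrative prompt (Portuguese)
output = llm("O sentido da vida e do universo é", max_tokens=64)
print(output["choices"][0]["text"])
```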
|
Gigax/NPC-LLM-7B-GGUF | Gigax | 2024-05-14T15:06:18Z | 6 | 4 | null | [
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-13T10:54:07Z | ---
license: apache-2.0
language:
- en
---
# NPC Model
This repo contains the domain-specific NPC model we've fine-tuned from **Mistral-7B**, using LoRA.
This model parses a text description of a game scene, and outputs commands like:
* `say <player1> "Hello Adventurer, care to join me on a quest?"`
* `greet <player1>`
* `attack <player1>`
* Any other `<action> <param>` you add to the prompt! (We call these "skills"!)
⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance ⚠️
## Usage
**Make your life easier, use our [Python client library](https://github.com/GigaxGames/gigax)**
* Instantiating the model using outlines:
```py
from outlines import models
from gigax.step import NPCStepper
from llama_cpp import Llama
# Download the model from the Gigax Hugging Face hub before running this code
# Our stepper takes in an Outlines model to enable guided generation
# This forces the model to follow our output format
llm = Llama.from_pretrained(
repo_id="Gigax/NPC-LLM-7B-GGUF",
filename="npc-llm-7B.gguf"
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# seed=1337, # Uncomment to set a specific seed
# n_ctx=2048, # Uncomment to increase the context window
)
model = models.LlamaCpp(llm)
# Instantiate a stepper: handles prompting + output parsing
stepper = NPCStepper(model=model)
```
* Calling the model on your game's data:
```py
from gigax.parse import CharacterAction
from gigax.scene import (
Character,
Item,
Location,
ProtagonistCharacter,
Skill,
ParameterType,
)
# Use sample data
context = "Medieval world"
current_location = Location(name="Old Town", description="A quiet and peaceful town.")
locations = [current_location] # you can add more locations to the scene
NPCs = [
Character(
name="John the Brave",
description="A fearless warrior",
current_location=current_location,
)
]
protagonist = ProtagonistCharacter(
name="Aldren",
description="Brave and curious",
current_location=current_location,
memories=["Saved the village", "Lost a friend"],
quests=["Find the ancient artifact", "Defeat the evil warlock"],
skills=[
Skill(
name="Attack",
description="Deliver a powerful blow",
parameter_types=[ParameterType.character],
)
],
psychological_profile="Determined and compassionate",
)
items = [Item(name="Sword", description="A sharp blade")]
events = [
CharacterAction(
command="Say",
protagonist=protagonist,
parameters=[items[0], "What a fine sword!"],
)
]
action = stepper.get_action(
context=context,
locations=locations,
NPCs=NPCs,
protagonist=protagonist,
items=items,
events=events,
)
```
## Input prompt
Here's a sample input prompt, showing you the format on which the model has been trained:
```txt
- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow
Aldren:
```
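If you want to call the GGUF weights without the gigax stepper, the sketch below feeds this exact prompt format through llama-cpp-python; the `stop` sequence, the `max_tokens` value, and the output shown in the comment are illustrative assumptions, not values Gigax documents.
```py
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Gigax/NPC-LLM-7B-GGUF",
    filename="npc-llm-7B.gguf",
)

# The prompt must follow the trained format shown above and end with "<name>:"
prompt = (
    "- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.\n"
    "- KNOWN LOCATIONS: Old Town\n"
    "- NPCS: John the Brave\n"
    "- CURRENT LOCATION: Old Town: A quiet and peaceful town.\n"
    "- CURRENT LOCATION ITEMS: Sword\n"
    "- LAST EVENTS:\n"
    "Aldren: Say Sword What a fine sword!\n"
    "- PROTAGONIST NAME: Aldren\n"
    "- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious\n"
    "- PROTAGONIST MEMORIES:\n"
    "Saved the village\n"
    "Lost a friend\n"
    "- PROTAGONIST PENDING QUESTS:\n"
    "Find the ancient artifact\n"
    "Defeat the evil warlock\n"
    "- PROTAGONIST ALLOWED ACTIONS:\n"
    "Attack <character> : Deliver a powerful blow\n"
    "Aldren:"
)

out = llm(prompt, max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"].strip())  # e.g. an action such as "Attack John the Brave"
```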
### 🤗 We are currently working hard on training on the latest SoTA models (Phi-3, Llama, etc.), and on better data! 🤗
## Model info
- **Developed by:** Gigax
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** [Mistral-7B-instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more!
## How to Cite
```bibtex
@misc{NPC-LLM-7B-GGUF,
url={https://huggingface.co/Gigax/NPC-LLM-7B-GGUF},
title={NPC-LLM-7B-GGUF},
author={Gigax team}
}
```
|
Gigax/NPC-LLM-7B | Gigax | 2024-05-14T15:05:43Z | 79 | 11 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T15:34:34Z | ---
license: apache-2.0
language:
- en
---
# NPC Model
This repo contains the domain-specific NPC model we've fine-tuned from **Mistral-7B**, using LoRA.
This model parses a text description of a game scene, and outputs commands like:
* `say <player1> "Hello Adventurer, care to join me on a quest?"`
* `greet <player1>`
* `attack <player1>`
* Any other `<action> <param>` you add to the prompt! (We call these "skills"!)
⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance ⚠️
## Usage
**Make your life easier, use our [Python client library](https://github.com/GigaxGames/gigax)**
* Instantiating the model using outlines:
```py
from outlines import models
from transformers import AutoModelForCausalLM, AutoTokenizer
from gigax.step import NPCStepper
# Download the model from the Hub
model_name = "Gigax/NPC-LLM-7B"
llm = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Our stepper takes in an Outlines model to enable guided generation
# This forces the model to follow our output format
model = models.Transformers(llm, tokenizer)
# Instantiate a stepper: handles prompting + output parsing
stepper = NPCStepper(model=model)
```
* Calling the model on your game's data:
```py
from gigax.parse import CharacterAction
from gigax.scene import (
Character,
Item,
Location,
ProtagonistCharacter,
Skill,
ParameterType,
)
# Use sample data
context = "Medieval world"
current_location = Location(name="Old Town", description="A quiet and peaceful town.")
locations = [current_location] # you can add more locations to the scene
NPCs = [
Character(
name="John the Brave",
description="A fearless warrior",
current_location=current_location,
)
]
protagonist = ProtagonistCharacter(
name="Aldren",
description="Brave and curious",
current_location=current_location,
memories=["Saved the village", "Lost a friend"],
quests=["Find the ancient artifact", "Defeat the evil warlock"],
skills=[
Skill(
name="Attack",
description="Deliver a powerful blow",
parameter_types=[ParameterType.character],
)
],
psychological_profile="Determined and compassionate",
)
items = [Item(name="Sword", description="A sharp blade")]
events = [
CharacterAction(
command="Say",
protagonist=protagonist,
parameters=[items[0], "What a fine sword!"],
)
]
action = stepper.get_action(
context=context,
locations=locations,
NPCs=NPCs,
protagonist=protagonist,
items=items,
events=events,
)
```
## Input prompt
Here's a sample input prompt, showing you the format on which the model has been trained:
```txt
- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow
Aldren:
```
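For a quick test without the gigax stepper, the sketch below prompts the checkpoint directly with transformers; the `prompt` placeholder stands in for a scene description in the format above, and the decoding settings are illustrative.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Gigax/NPC-LLM-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build a scene description in the trained format, ending with "Aldren:"
prompt = "..."  # placeholder: paste a prompt like the sample above
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)
action = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(action.split("\n")[0])  # keep only the first generated line
```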
### 🤗 We are currently working hard on training on the latest SoTA models (Phi-3, Llama, etc.), and on better data! 🤗
## Model info
- **Developed by:** Gigax
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** [Mistral-7B-instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more!
## How to Cite
```bibtex
@misc{NPC-LLM-7B,
url={https://huggingface.co/Gigax/NPC-LLM-7B},
title={NPC-LLM-7B},
author={Gigax team}
}
``` |
Gigax/NPC-LLM-3_8B-GGUF | Gigax | 2024-05-14T15:05:13Z | 32 | 1 | null | [
"gguf",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T11:37:15Z | ---
license: mit
language:
- en
---
# NPC Model
This repo contains the domain-specific NPC model we've fine-tuned from **Phi-3**, using LoRA.
This model parses a text description of a game scene, and outputs commands like:
* `say <player1> "Hello Adventurer, care to join me on a quest?"`
* `greet <player1>`
* `attack <player1>`
* Any other `<action> <param>` you add to the prompt! (We call these "skills"!)
⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance ⚠️
## Usage
**Make your life easier, use our [Python client library](https://github.com/GigaxGames/gigax)**
* Instantiating the model using outlines:
```py
from outlines import models
from gigax.step import NPCStepper
from llama_cpp import Llama
# Download the model from the Gigax Hugging Face hub before running this code
# Our stepper takes in an Outlines model to enable guided generation
# This forces the model to follow our output format
llm = Llama.from_pretrained(
repo_id="Gigax/NPC-LLM-3_8B-GGUF",
filename="npc-llm-3_8B.gguf"
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# seed=1337, # Uncomment to set a specific seed
# n_ctx=2048, # Uncomment to increase the context window
)
model = models.LlamaCpp(llm)
# Instantiate a stepper: handles prompting + output parsing
stepper = NPCStepper(model=model)
```
* Calling the model on your game's data:
```py
from gigax.parse import CharacterAction
from gigax.scene import (
Character,
Item,
Location,
ProtagonistCharacter,
Skill,
ParameterType,
)
# Use sample data
context = "Medieval world"
current_location = Location(name="Old Town", description="A quiet and peaceful town.")
locations = [current_location] # you can add more locations to the scene
NPCs = [
Character(
name="John the Brave",
description="A fearless warrior",
current_location=current_location,
)
]
protagonist = ProtagonistCharacter(
name="Aldren",
description="Brave and curious",
current_location=current_location,
memories=["Saved the village", "Lost a friend"],
quests=["Find the ancient artifact", "Defeat the evil warlock"],
skills=[
Skill(
name="Attack",
description="Deliver a powerful blow",
parameter_types=[ParameterType.character],
)
],
psychological_profile="Determined and compassionate",
)
items = [Item(name="Sword", description="A sharp blade")]
events = [
CharacterAction(
command="Say",
protagonist=protagonist,
parameters=[items[0], "What a fine sword!"],
)
]
action = stepper.get_action(
context=context,
locations=locations,
NPCs=NPCs,
protagonist=protagonist,
items=items,
events=events,
)
```
## Input prompt
Here's a sample input prompt, showing you the format on which the model has been trained:
```txt
- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow
Aldren:
```
### 🤗 We are currently working hard on training on the latest SoTA models (Phi-3, Llama, etc.), and on better data! 🤗
## Model info
- **Developed by:** Gigax
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more!
## How to Cite
```bibtex
@misc{NPC-LLM-3_8B-GGUF,
url={https://huggingface.co/Gigax/NPC-LLM-3_8B-GGUF},
title={NPC-LLM-3_8B-GGUF},
author={Gigax team}
}
``` |
Gigax/NPC-LLM-3_8B-128k-GGUF | Gigax | 2024-05-14T15:04:28Z | 4 | 2 | null | [
"gguf",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T11:55:47Z | ---
license: mit
language:
- en
---
# NPC Model
This repo contains the domain-specific NPC model we've fine-tuned from **Phi-3-128k**, using LoRA.
This model parses a text description of a game scene, and outputs commands like:
* `say <player1> "Hello Adventurer, care to join me on a quest?"`
* `greet <player1>`
* `attack <player1>`
* Any other `<action> <param>` you add to the prompt! (We call these "skills"!)
⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance ⚠️
## Usage
**Make your life easier, use our [Python client library](https://github.com/GigaxGames/gigax)**
* Instantiating the model using outlines:
```py
from outlines import models
from gigax.step import NPCStepper
from llama_cpp import Llama
# Download the model from the Gigax Hugging Face hub before running this code
# Our stepper takes in an Outlines model to enable guided generation
# This forces the model to follow our output format
llm = Llama.from_pretrained(
repo_id="Gigax/NPC-LLM-3_8B-128k-GGUF",
filename="npc-llm-3_8B-128k.gguf"
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# seed=1337, # Uncomment to set a specific seed
# n_ctx=2048, # Uncomment to increase the context window
)
model = models.LlamaCpp(llm)
# Instantiate a stepper: handles prompting + output parsing
stepper = NPCStepper(model=model)
```
* Calling the model on your game's data:
```py
from gigax.parse import CharacterAction
from gigax.scene import (
Character,
Item,
Location,
ProtagonistCharacter,
Skill,
ParameterType,
)
# Use sample data
context = "Medieval world"
current_location = Location(name="Old Town", description="A quiet and peaceful town.")
locations = [current_location] # you can add more locations to the scene
NPCs = [
Character(
name="John the Brave",
description="A fearless warrior",
current_location=current_location,
)
]
protagonist = ProtagonistCharacter(
name="Aldren",
description="Brave and curious",
current_location=current_location,
memories=["Saved the village", "Lost a friend"],
quests=["Find the ancient artifact", "Defeat the evil warlock"],
skills=[
Skill(
name="Attack",
description="Deliver a powerful blow",
parameter_types=[ParameterType.character],
)
],
psychological_profile="Determined and compassionate",
)
items = [Item(name="Sword", description="A sharp blade")]
events = [
CharacterAction(
command="Say",
protagonist=protagonist,
parameters=[items[0], "What a fine sword!"],
)
]
action = stepper.get_action(
context=context,
locations=locations,
NPCs=NPCs,
protagonist=protagonist,
items=items,
events=events,
)
```
## Input prompt
Here's a sample input prompt, showing you the format on which the model has been trained:
```txt
- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow
Aldren:
```
### 🤗 We are currently working hard on training on the latest SoTA models (Phi-3, Llama, etc.), and on better data! 🤗
## Model info
- **Developed by:** Gigax
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
- **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more!
## How to Cite
```bibtex
@misc{NPC-LLM-3_8B-128k-GGUF,
url={https://huggingface.co/Gigax/NPC-LLM-3_8B-128k-GGUF},
title={NPC-LLM-3_8B-128k-GGUF},
author={Gigax team}
}
``` |
NikolayKozloff/Llama-3-portuguese-Tom-cat-8b-instruct-Q6_K-GGUF | NikolayKozloff | 2024-05-14T15:03:48Z | 6 | 1 | transformers | [
"transformers",
"gguf",
"portugues",
"portuguese",
"QA",
"instruct",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"pt",
"dataset:rhaymison/superset",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-14T15:03:30Z | ---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
- llama-cpp
- gguf-my-repo
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation
model-index:
- name: Llama-3-portuguese-Tom-cat-8b-instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 70.4
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 58.0
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 51.07
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 90.91
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 75.4
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 76.05
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 86.99
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 60.39
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 65.92
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
---
# NikolayKozloff/Llama-3-portuguese-Tom-cat-8b-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct`](https://huggingface.co/rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Llama-3-portuguese-Tom-cat-8b-instruct-Q6_K-GGUF --model llama-3-portuguese-tom-cat-8b-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-portuguese-Tom-cat-8b-instruct-Q6_K-GGUF --model llama-3-portuguese-tom-cat-8b-instruct.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-portuguese-tom-cat-8b-instruct.Q6_K.gguf -n 128
```
|
Gigax/NPC-LLM-3_8B-128k | Gigax | 2024-05-14T15:03:43Z | 152 | 5 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-02T11:20:47Z | ---
license: mit
language:
- en
---
# NPC Model
This repo contains the domain-specific NPC model we've fine-tuned from **Phi-3-128k**, using LoRA.
This model parses a text description of a game scene, and outputs commands like:
* `say <player1> "Hello Adventurer, care to join me on a quest?"`
* `greet <player1>`
* `attack <player1>`
* Any other `<action> <param>` you add to the prompt! (We call these "skills"!)
⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance ⚠️
## Usage
**Make your life easier, use our [Python client library](https://github.com/GigaxGames/gigax)**
* Instantiating the model using outlines:
```py
from outlines import models
from transformers import AutoModelForCausalLM, AutoTokenizer
from gigax.step import NPCStepper
# Download the model from the Hub
model_name = "Gigax/NPC-LLM-3_8B-128k"
llm = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Our stepper takes in an Outlines model to enable guided generation
# This forces the model to follow our output format
model = models.Transformers(llm, tokenizer)
# Instantiate a stepper: handles prompting + output parsing
stepper = NPCStepper(model=model)
```
* Calling the model on your game's data:
```py
from gigax.parse import CharacterAction
from gigax.scene import (
Character,
Item,
Location,
ProtagonistCharacter,
Skill,
ParameterType,
)
# Use sample data
context = "Medieval world"
current_location = Location(name="Old Town", description="A quiet and peaceful town.")
locations = [current_location] # you can add more locations to the scene
NPCs = [
Character(
name="John the Brave",
description="A fearless warrior",
current_location=current_location,
)
]
protagonist = ProtagonistCharacter(
name="Aldren",
description="Brave and curious",
current_location=current_location,
memories=["Saved the village", "Lost a friend"],
quests=["Find the ancient artifact", "Defeat the evil warlock"],
skills=[
Skill(
name="Attack",
description="Deliver a powerful blow",
parameter_types=[ParameterType.character],
)
],
psychological_profile="Determined and compassionate",
)
items = [Item(name="Sword", description="A sharp blade")]
events = [
CharacterAction(
command="Say",
protagonist=protagonist,
parameters=[items[0], "What a fine sword!"],
)
]
action = stepper.get_action(
context=context,
locations=locations,
NPCs=NPCs,
protagonist=protagonist,
items=items,
events=events,
)
```
## Input prompt
Here's a sample input prompt, showing you the format on which the model has been trained:
```txt
- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow
Aldren:
```
### 🤗 We are currently working hard on training on the latest SoTA models (Phi-3, Llama, etc.), and on better data! 🤗
## Model info
- **Developed by:** Gigax
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
- **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more!
## How to Cite
```bibtex
@misc{NPC-LLM-3_8B-128k,
url={https://huggingface.co/Gigax/NPC-LLM-3_8B-128k},
title={NPC-LLM-3_8B-128k},
author={Gigax team}
}
```
|
SKLxAiforia/FriendV4.1 | SKLxAiforia | 2024-05-14T15:02:11Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T12:54:14Z | ---
library_name: transformers
tags: []
---
# Short intro to the model
The model's base prompt consists of 5 main blocks, separated by the `\n` character.
```
Friend name: {friend_name}\n
Friend description: {friend_description}\n
Friend intention_of_friend: {intention_of_friend}\n
Person name: {person_name}\n
Person description: {person_description}\n
{dialogue}
```
The dialogue is a sequence of utterances in the following order, separated by `\n`:
```
{user_name}: {user_reply_1}\n
{bot_name}: {bot_reply_1}\n
{user_name}: {user_reply_2}\n
...
```
The context window is 20 messages. The model may be able to hold a conversation over a larger number of messages, but this has not been tested.
It is not recommended to leave any of the blocks in the base prompt empty; if you have no information, write something very general and basic.
# Example of usage
```python
import requests
import json
URL = "http://35.209.126.102:7727/generate" #"https://3a31-34-170-161-27.ngrok-free.app/generate"
MAX_CONTEXT_LENGTH = 20
BOT_PROMPT = "Jamie"
USER_PROMPT = "Blake"
NARRATIVE = "\n".join([
"Friend name: Jamie",
"Friend description: Jamie is an ever-curious soul with a penchant for photography and volunteering at animal shelters. They were born in Melbourne and find joy in spontaneous road trips and outdoor adventures. Jamie, at 26 years old, carries an air of comforting assurance with an eclectic taste in indie music."
"Friend intention_of_friend: Jamie's intention is to provide a safe space for Person to share their feelings. By engaging in meaningful dialogue, Jamie seeks to help Person recognize their own strengths and feel less isolated.",
"Person name: Blake",
"Person description: Blake, a reserved 23-year-old software engineer from Toronto, has a particular fondness for classic literature and chess. They appear indifferent on the surface but beneath lies a depth shaped by a recent breakup and a demanding career.",
])
SEPARATOR = "\n"
def generate(
prompt: str,
url: str = URL,
) -> str:
req_data = json.dumps({
"inputs": prompt,
"parameters": {
"max_new_tokens": 30,
"stop": ["\n", " \n", ".\n", "?\n"],
"top_p": 0.9,
"temperature": 0.95,
"top_k": 50,
"do_sample": True,
}
})
headers = {
'Content-Type': 'application/json'
}
response = requests.post(url=url, data=req_data, headers=headers).json()
response_text = response["generated_text"].strip()
if '\n' in response_text:
response_text = response_text.split('\n')[0]
return response_text
def make_prompt(context: list[str]) -> str:
return SEPARATOR.join(
[NARRATIVE] + context[-MAX_CONTEXT_LENGTH:] + [f"{BOT_PROMPT}:"]
)
if __name__ == "__main__":
messages = []
while True:
user_phrase = input("You: ")
messages.append(f"{USER_PROMPT}: {user_phrase}")
model_prompt = make_prompt(context=messages)
generated_response = generate(model_prompt)
bot_phrase = f"{BOT_PROMPT}: {generated_response}"
messages.append(bot_phrase)
print(bot_phrase)
```
# Prompt Examples
```Friend name: Sam\nFriend description: Sam is a life coach and yoga enthusiast from San Francisco, aged 29, who thrives in assisting others to find their path. They have a past filled with overcoming personal obstacles, which they openly share to inspire resilience in others. They love experimenting with vegan recipes.\nFriend intention_of_friend: Sam intends to help Person build self-esteem and introduce healthy routines into their life. Through their conversation, Sam plans to motivate Person to practice self-care and mindfulness.\nPerson name: Cameron\nPerson description: Cameron, age 31, is a jaded musician living in New Orleans. Once hopeful and lively, recent setbacks in their career have led to disillusionment. Known for a sharp wit, they nonetheless retain a deep love for live jazz and rainy afternoons.\n```
```Friend name: Taylor\nFriend description: Taylor, a charismatic event planner from New Orleans, 27, often feels energized by the dynamic bustle of city life. Their genuine care for others shines through in their active volunteer work. Taylor's personal journey includes a powerful narrative of self-discovery after college.\nFriend intention_of_friend: Taylor's intention is to uplift Person by getting them involved in local community events to foster a sense of belonging and purpose, something Taylor believes in strongly.\nPerson name: Harper\nPerson description: Harper, a 22-year-old aspiring writer from Dublin, harbors a zest for historical novels and boxing. Though typically cold and standoffish, they dream of authentic connections and a break from the monotony of their daily routine.\n```
```Friend name: Riley\nFriend description: Riley, a world-wise traveler, 34, hails from a small coastal town in Iceland. They are a documentary filmmaker with an impressive collection of folk music records. Despite their accomplished life, they remain grounded and relatable, always seeking new friendships.\nFriend intention_of_friend: Riley wants to help Person discover the enriching experience of embracing different cultures. By sharing travel stories, Riley aims to spark an interest in Person to see the world from a fresh perspective.\nPerson name: Jordan\nPerson description: Jordan, an introverted postgrad student in philosophy from New York City, values solitude and reflection. At 25, they frequently grapple with existential questions, which can overshadow daily joys. Their analytical mind enjoys puzzles, but Jordan often struggles to connect with others.\n```
```Friend name: Alex\nFriend description: Alex, age 28, is a free spirit originally from Portland, operating a cozy bookstore caf\u00e9. They have a fascination with culinary arts and a storied history in dance. A compassionate listener, Alex's vibrancy is contagious, and they find beauty in candid conversations.\nFriend intention_of_friend: Alex's goal is to encourage Person to explore and embrace their creative side. They believe creativity can be a therapeutic outlet and want to help Person find a passion to pursue.\nPerson name: Morgan\nPerson description: Morgan, a skeptical graphic designer from a small town in Italy, is 30 years old with a brave face masking their apprehension towards new relationships. A methodical thinker, they enjoy strategy games and have a bittersweet relationship with the fast-paced digital world.\n```
```Friend name: Casey\nFriend description: Casey is a compassionate nurse from a cozy Colorado mountain town, 32. They balance their intense career with a passion for rock climbing and a dedication to living sustainably. Casey values authenticity and never shies away from showing empathy to both patients and strangers alike.\nFriend intention_of_friend: Casey aims to guide Person toward embracing outdoor activities for their therapeutic benefits and to inspire Person to nurture a connection with nature.\nPerson name: Quinn\nPerson description: Quinn, a 35-year-old real estate agent from Miami, is known for a sharp business acumen and a no-nonsense attitude. Beneath this fa\u00e7ade, Quinn has a surprisingly deep appreciation for poetry and solitude, often reflecting on the ephemerality of success.\n```
|
Gigax/NPC-LLM-3_8B | Gigax | 2024-05-14T15:02:02Z | 69 | 24 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-28T19:38:52Z | ---
license: mit
language:
- en
---
# NPC Model
This repo contains the domain-specific NPC model we've fine-tuned from **Phi-3**, using LoRA.
This model parses a text description of a game scene, and outputs commands like:
* `say <player1> "Hello Adventurer, care to join me on a quest?"`
* `greet <player1>`
* `attack <player1>`
* Any other `<action> <param>` you add to the prompt! (We call these "skills"!)
⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance ⚠️
## Usage
**Make your life easier, use our [Python client library](https://github.com/GigaxGames/gigax)**
* Instantiating the model using outlines:
```py
from outlines import models
from transformers import AutoModelForCausalLM, AutoTokenizer
from gigax.step import NPCStepper
# Download the model from the Hub
model_name = "Gigax/NPC-LLM-3_8B"
llm = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Our stepper takes in an Outlines model to enable guided generation
# This forces the model to follow our output format
model = models.Transformers(llm, tokenizer)
# Instantiate a stepper: handles prompting + output parsing
stepper = NPCStepper(model=model)
```
* Calling the model on your game's data:
```py
from gigax.parse import CharacterAction
from gigax.scene import (
Character,
Item,
Location,
ProtagonistCharacter,
Skill,
ParameterType,
)
# Use sample data
context = "Medieval world"
current_location = Location(name="Old Town", description="A quiet and peaceful town.")
locations = [current_location] # you can add more locations to the scene
NPCs = [
Character(
name="John the Brave",
description="A fearless warrior",
current_location=current_location,
)
]
protagonist = ProtagonistCharacter(
name="Aldren",
description="Brave and curious",
current_location=current_location,
memories=["Saved the village", "Lost a friend"],
quests=["Find the ancient artifact", "Defeat the evil warlock"],
skills=[
Skill(
name="Attack",
description="Deliver a powerful blow",
parameter_types=[ParameterType.character],
)
],
psychological_profile="Determined and compassionate",
)
items = [Item(name="Sword", description="A sharp blade")]
events = [
CharacterAction(
command="Say",
protagonist=protagonist,
parameters=[items[0], "What a fine sword!"],
)
]
action = stepper.get_action(
context=context,
locations=locations,
NPCs=NPCs,
protagonist=protagonist,
items=items,
events=events,
)
```
## Input prompt
Here's a sample input prompt, showing you the format on which the model has been trained:
```txt
- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow
Aldren:
```
### 🤗 We are currently working hard on training on the latest SoTA models (Phi-3, Llama, etc.), and on better data! 🤗
## Model info
- **Developed by:** Gigax
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more!
## How to Cite
```bibtex
@misc{NPC-LLM-3_8B,
url={https://huggingface.co/Gigax/NPC-LLM-3_8B},
title={NPC-LLM-3_8B},
author={Gigax team}
}
```
|
farenassr/autotrain-autotrain-my-custom-diversity | farenassr | 2024-05-14T15:00:05Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T14:59:36Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
quocanh944/viT5-med-qa | quocanh944 | 2024-05-14T14:59:05Z | 163 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T14:57:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
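As the card leaves this section blank, here is a minimal, untested sketch inferred from the repo's tags (a T5-style text2text model); the example question is hypothetical.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "quocanh944/viT5-med-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical Vietnamese medical question
question = "Triệu chứng thường gặp của bệnh cúm là gì?"
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```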
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
blackhole33/uzbek-speaker-verification-v3 | blackhole33 | 2024-05-14T14:56:39Z | 2 | 0 | nemo | [
"nemo",
"pytorch",
"NeMo",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-14T13:36:12Z | ---
license: cc-by-4.0
library_name: nemo
tags:
- pytorch
- NeMo
---
# Uzbek-speaker-verification-v3
**Put a short model description here.**
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
**NOTE**: Please update the model class below to match the class of the model being uploaded.
```python
from nemo.core import ModelPT
model = ModelPT.from_pretrained("ai-nightcoder/uzbek-speaker-verification-v3")
```
### NOTE
Add some information about how to use the model here. An example is provided for ASR inference below.
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="ai-nightcoder/uzbek-speaker-verification-v3" audio_dir=""
```
### Input
**Add some information about what the inputs to this model are**
### Output
**Add some information about what the outputs of this model are**
## Model Architecture
**Add information here discussing architectural details of the model or any comments to users about the model.**
## Training
**Add information here about how the model was trained. It should be as detailed as possible, potentially including the link to the script used to train as well as the base config used to train the model. If extraneous scripts are used to prepare the components of the model, please include them here.**
### NOTE
An example is provided below for ASR
The NeMo toolkit [3] was used to train the models for over several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/fast-conformer_transducer_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
**Try to provide as detailed a list of datasets as possible. If possible, provide links to the datasets on HF by adding it to the manifest section at the top of the README (marked by ---).**
### NOTE
An example for the manifest section is provided below for ASR datasets
datasets:
- librispeech_asr
- fisher_corpus
- Switchboard-1
- WSJ-0
- WSJ-1
- National-Singapore-Corpus-Part-1
- National-Singapore-Corpus-Part-6
- vctk
- voxpopuli
- europarl
- multilingual_librispeech
- mozilla-foundation/common_voice_8_0
- MLCommons/peoples_speech
The corresponding text in this section for those datasets is stated below -
The model was trained on 64K hours of English speech collected and prepared by NVIDIA NeMo and Suno teams.
The training dataset consists of a private subset with 40K hours of English speech plus 24K hours from the following public datasets:
- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hour subset
- Mozilla Common Voice (v7.0)
- People's Speech - 12,000 hour subset
## Performance
**Add information here about the performance of the model. Discuss what metric is being used to evaluate the model, and if there are external links explaining the custom metric, please link to them.
### NOTE
An example ASR metrics list that can be added to the top of the README is provided below
model-index:
- name: PUT_MODEL_NAME
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: AMI (Meetings test)
type: edinburghcstr/ami
config: ihm
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 17.10
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Earnings-22
type: revdotcom/earnings22
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 14.11
Provide any caveats about the results presented at the top of the discussion so that nuance is not lost.
It should ideally be in a tabular format (you can use the following website to make your tables in markdown format - https://www.tablesgenerator.com/markdown_tables)**
## Limitations
**Discuss any practical limitations of the model when used in real-world cases. These can also be legal disclaimers, or discussion regarding the safety of the model (particularly in the case of LLMs).**
### Note
An example is provided below
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
|
NikolayKozloff/Phi-3-mini-4k-instruct-dansk-Q8_0-GGUF | NikolayKozloff | 2024-05-14T14:56:16Z | 7 | 1 | null | [
"gguf",
"trl",
"sft",
"generated_from_trainer",
"danish",
"llama-cpp",
"gguf-my-repo",
"dataset:kobprof/skolegpt-instruct",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T14:56:03Z | ---
license: mit
tags:
- trl
- sft
- generated_from_trainer
- danish
- llama-cpp
- gguf-my-repo
base_model: microsoft/Phi-3-mini-4k-instruct
datasets:
- kobprof/skolegpt-instruct
model-index:
- name: Phi-3-mini-4k-instruct-dansk
results: []
---
# NikolayKozloff/Phi-3-mini-4k-instruct-dansk-Q8_0-GGUF
This model was converted to GGUF format from [`emillykkejensen/Phi-3-mini-4k-instruct-dansk`](https://huggingface.co/emillykkejensen/Phi-3-mini-4k-instruct-dansk) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/emillykkejensen/Phi-3-mini-4k-instruct-dansk) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Phi-3-mini-4k-instruct-dansk-Q8_0-GGUF --model phi-3-mini-4k-instruct-dansk.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/Phi-3-mini-4k-instruct-dansk-Q8_0-GGUF --model phi-3-mini-4k-instruct-dansk.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-4k-instruct-dansk.Q8_0.gguf -n 128
```
|
kyl23/hw3_SST2_bitfit_1e-5 | kyl23 | 2024-05-14T14:55:25Z | 180 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T14:54:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TSingye/DYG_DistillGPT-2 | TSingye | 2024-05-14T14:52:45Z | 145 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T14:48:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
terry69/mistral_poe_nores | terry69 | 2024-05-14T14:49:13Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T13:09:29Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: mistral_poe_nores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_poe_nores
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 325 | nan |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
emilykang/Gemma_finetune_med | emilykang | 2024-05-14T14:48:35Z | 7 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T12:23:43Z | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: Gemma_finetune_med
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma_finetune_med
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
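Since this is a PEFT (LoRA) adapter on top of `google/gemma-2b`, a minimal inference sketch might look as follows; the prompt and generation settings are illustrative assumptions, not part of the documented training setup:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the adapter together with its google/gemma-2b base model.
model = AutoPeftModelForCausalLM.from_pretrained("emilykang/Gemma_finetune_med")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# Illustrative medical prompt; the fine-tuning data is not documented here.
inputs = tokenizer("What are common symptoms of anemia?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```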
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.0.1+cu117
- Datasets 2.19.0
- Tokenizers 0.19.1 |
Recaru/Llama-3-KoEn-8B-Instruct-preview-Q5_K_M-GGUF | Recaru | 2024-05-14T14:48:20Z | 2 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-3-ko",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ko",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-14T14:47:59Z | ---
language:
- en
- ko
license: cc-by-nc-sa-4.0
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
---
# Recaru/Llama-3-KoEn-8B-Instruct-preview-Q5_K_M-GGUF
This model was converted to GGUF format from [`beomi/Llama-3-KoEn-8B-Instruct-preview`](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Recaru/Llama-3-KoEn-8B-Instruct-preview-Q5_K_M-GGUF --model llama-3-koen-8b-instruct-preview.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Recaru/Llama-3-KoEn-8B-Instruct-preview-Q5_K_M-GGUF --model llama-3-koen-8b-instruct-preview.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-koen-8b-instruct-preview.Q5_K_M.gguf -n 128
```
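The GGUF file can also be used from Python through the `llama-cpp-python` bindings; a minimal sketch, assuming a recent version that provides `Llama.from_pretrained` (sampling settings are illustrative):
```python
from llama_cpp import Llama

# Downloads the quantized file from this repo and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="Recaru/Llama-3-KoEn-8B-Instruct-preview-Q5_K_M-GGUF",
    filename="llama-3-koen-8b-instruct-preview.Q5_K_M.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```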
|
Litzy619/G0513HMAB2 | Litzy619 | 2024-05-14T14:42:52Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T08:42:47Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0513HMAB2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0513HMAB2
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9285 | 0.09 | 10 | 1.9193 |
| 1.9268 | 0.18 | 20 | 1.9150 |
| 1.9047 | 0.27 | 30 | 1.8833 |
| 1.8501 | 0.36 | 40 | 1.7905 |
| 1.7172 | 0.45 | 50 | 1.6083 |
| 1.4992 | 0.54 | 60 | 1.3297 |
| 1.1821 | 0.63 | 70 | 0.9550 |
| 0.748 | 0.73 | 80 | 0.5145 |
| 0.3913 | 0.82 | 90 | 0.2609 |
| 0.2021 | 0.91 | 100 | 0.1661 |
| 0.1594 | 1.0 | 110 | 0.1513 |
| 0.1462 | 1.09 | 120 | 0.1484 |
| 0.1441 | 1.18 | 130 | 0.1473 |
| 0.1453 | 1.27 | 140 | 0.1458 |
| 0.1485 | 1.36 | 150 | 0.1448 |
| 0.1407 | 1.45 | 160 | 0.1455 |
| 0.1417 | 1.54 | 170 | 0.1428 |
| 0.1421 | 1.63 | 180 | 0.1416 |
| 0.1428 | 1.72 | 190 | 0.1438 |
| 0.1398 | 1.81 | 200 | 0.1403 |
| 0.1399 | 1.9 | 210 | 0.1392 |
| 0.141 | 1.99 | 220 | 0.1394 |
| 0.1377 | 2.08 | 230 | 0.1379 |
| 0.1363 | 2.18 | 240 | 0.1374 |
| 0.1352 | 2.27 | 250 | 0.1375 |
| 0.1394 | 2.36 | 260 | 0.1375 |
| 0.1362 | 2.45 | 270 | 0.1373 |
| 0.1324 | 2.54 | 280 | 0.1369 |
| 0.1317 | 2.63 | 290 | 0.1367 |
| 0.133 | 2.72 | 300 | 0.1365 |
| 0.1341 | 2.81 | 310 | 0.1364 |
| 0.1346 | 2.9 | 320 | 0.1364 |
| 0.1365 | 2.99 | 330 | 0.1364 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
QinLiuNLP/mistral-poe-10p-detach | QinLiuNLP | 2024-05-14T14:31:09Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T08:34:11Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: mistral-poe-10p-detach
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-poe-10p-detach
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7345 | 1.0 | 3898 | nan |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2 |
AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook_grobid | AhmetAytar | 2024-05-14T14:30:17Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-14T14:26:27Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook_grobid
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook_grobid')
embeddings = model.encode(sentences)
print(embeddings)
```
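Since the model is intended for clustering and semantic search, a natural next step is comparing the embeddings; a minimal sketch using the library's built-in cosine similarity:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook_grobid')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```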
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook_grobid)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 160 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 50,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 32,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
cimphony-ai-admin/Cimphony-Mistral-Law-7B | cimphony-ai-admin | 2024-05-14T14:28:04Z | 30 | 3 | peft | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"model-index",
"region:us"
] | text-generation | 2024-05-10T18:58:33Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: Cimphony-Mistral-Law-7B
results:
- task:
type: text-generation
dataset:
type: cais/mmlu
name: MMLU
metrics:
- name: International Law
type: accuracy
value: 0.802
verified: false
- task:
type: text-generation
dataset:
type: cais/mmlu
name: MMLU
metrics:
- name: Jurisprudence
type: accuracy
value: 0.704
verified: false
- task:
type: text-generation
dataset:
type: cais/mmlu
name: MMLU
metrics:
- name: Professional Law
type: accuracy
value: 0.416
verified: false
- task:
type: text-generation
dataset:
type: coastalcph/lex_glue
name: LexGLUE
metrics:
- name: ECtHR A
type: balanced accuracy
value: 0.631
verified: false
- task:
type: text-generation
dataset:
type: coastalcph/lex_glue
name: LexGLUE
metrics:
- name: LEDGAR
type: balanced accuracy
value: 0.741
verified: false
- task:
type: text-generation
dataset:
type: coastalcph/lex_glue
name: LexGLUE
metrics:
- name: CaseHOLD
type: accuracy
value: 0.776
verified: false
- task:
type: text-generation
dataset:
type: coastalcph/lex_glue
name: LexGLUE
metrics:
- name: Unfair-ToS
type: balanced accuracy
value: 0.809
verified: false
pipeline_tag: text-generation
---
# Cimphony-Mistral-Law-7B
We introduce Cimphony-Mistral-Law-7B, a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Cimphony’s LLMs present state-of-the-art performance on legal benchmarks, surpassing models trained on much larger corpora with significantly more resources, including GPT-4, OpenAI’s flagship model.
Check out and register on our platform: [https://cimphony.ai](https://app.cimphony.ai/signup?callbackUrl=https://app.cimphony.ai/)

## Model description
The model was trained on 600M tokens. We use novel methods to expose the model to this corpus during training, blending a variety of legal reading-comprehension tasks with general language data.
## Legal Evaluation Results
We evaluate on the legal splits of the MMLU benchmark, as well as LexGLUE. While both are multiple-choice benchmarks, prompts were adapted so that the models output a single answer. In some cases, additional post-processing was required.
Benchmarks for which the labels are A-E multiple-choice options use an accuracy metric. Benchmarks that have a closed list of options (e.g. Unfair-ToS) use a balanced-accuracy metric, as classes may not be balanced (a minimal computation sketch follows the table below).
| Model / Benchmark | International Law (MMLU) | Jurisprudence (MMLU) | Professional law (MMLU) | ECtHR A (LexGlue) | LEDGAR (LexGlue) | CaseHOLD (LexGlue) | Unfair-ToS (LexGlue) |
|:-----------------------------------|:--------------------------|:----------------------|:-------------------------|:-------------------|:------------------|:--------------------|:-----------------------|
| Mistral-7B-Instruct-v0.2 | 73.6% | 69.4% | 41.2% | 67.5% | 50.6% | 56.3% | 36.6% |
| AdaptLLM | 57.0% | 52.8% | 36.1% | 51.9% | 46.3% | 50.0% | 51.3% |
| Saul-7B | 69.4% | 63.0% | **43.2%** | **71.2%** | 55.9% | 65.8% | 80.3% |
| Cimphony-7B | **80.2%** | **70.4%** | 41.6% | 63.1% | **74.1%** | **77.6%** | **80.9%** |
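As a concrete illustration of the balanced-accuracy metric used above, a minimal sketch with scikit-learn (the toy labels below are assumptions, not benchmark data):
```python
from sklearn.metrics import balanced_accuracy_score

# Balanced accuracy averages per-class recall, so majority classes
# cannot dominate the score on imbalanced label sets.
y_true = ["fair", "fair", "fair", "unfair", "unfair", "none"]
y_pred = ["fair", "fair", "unfair", "unfair", "none", "none"]
print(balanced_accuracy_score(y_true, y_pred))  # ≈ 0.72
```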
## Training and evaluation data
Following the framework presented in [AdaptLLM](https://huggingface.co/AdaptLLM/law-chat), we convert raw legal text into reading-comprehension tasks, taking inspiration from human learning via reading comprehension: practice after reading improves the ability to answer questions based on the learned knowledge.
We developed a high-quality prompt database, considering the capabilities we’d like the model to possess. LLMs were prompted with the raw text and a collection of prompts, and returned answers, additional questions, and transformations relevant to the input data. With further post-processing of these outputs, we created our legal reading-comprehension dataset.
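A minimal sketch of the general recipe (the actual prompt database is not public; the template below is purely an illustrative assumption):
```python
# Illustrative template for turning a raw legal passage into a
# reading-comprehension example; not Cimphony's actual prompt database.
TEMPLATE = (
    "Read the following legal text, then write one question it answers "
    "and the corresponding answer.\n\nText:\n{passage}\n"
)

def make_reading_comprehension_prompt(passage: str) -> str:
    return TEMPLATE.format(passage=passage)

print(make_reading_comprehension_prompt(
    "An ex post facto law retroactively changes the legal consequences of past acts."
))
```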
| Domain | Dataset | Tokens | License |
|:-------------------|:--------------------|:------:|:------------|
| Legal | The Pile (FreeLaw) | 180M | MIT |
| Legal | LexGlue (train split only) | 108M | CC-BY-4.0 |
| Legal | USClassActions | 12M | GPL-3.0 |
| Math (CoT) | AQUA-RAT | 3M | Apache-2.0 |
| Commonsense (CoT) | ECQA | 2.4M | Apache-2.0 |
| Reasoning (CoT) | EntailmentBank | 1.8M | Apache-2.0 |
| Chat | UltraChat | 90M | MIT |
| Code | Code-Feedback | 36M | Apache-2.0 |
| Instruction | OpenOrca | 180M | MIT |
## Intended uses & limitations
This model can be used for use cases involving legal-domain text generation.
As with any language model, users must not rely solely on model generations. This model has not gone through human-feedback alignment (RLHF). The model may generate responses containing hallucinations and biases.
Example use:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
tokenizer = AutoTokenizer.from_pretrained("cimphonyadmin/Cimphony-Mistral-Law-7B")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(model, "cimphonyadmin/Cimphony-Mistral-Law-7B")
# Put your input here:
user_input = '''What can you tell me about ex post facto laws?'''
# Apply the chat template (it expects a list of message dicts, not a raw string)
messages = [{"role": "user", "content": user_input}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=4096)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 24
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
crrodrvi/First_Order_Motion | crrodrvi | 2024-05-14T14:27:37Z | 0 | 0 | null | [
"arxiv:2104.11280",
"region:us"
] | null | 2024-05-11T21:47:57Z | <b>!!! Check out our new [paper](https://arxiv.org/pdf/2104.11280.pdf) and [framework](https://github.com/snap-research/articulated-animation) improved for articulated objects</b>
# First Order Motion Model for Image Animation
This repository contains the source code for the paper [First Order Motion Model for Image Animation](https://papers.nips.cc/paper/8935-first-order-motion-model-for-image-animation) by Aliaksandr Siarohin, [Stéphane Lathuilière](http://stelat.eu), [Sergey Tulyakov](http://stulyakov.com), [Elisa Ricci](http://elisaricci.eu/) and [Nicu Sebe](http://disi.unitn.it/~sebe/).
[Hugging Face Spaces](https://huggingface.co/spaces/abhishek/first-order-motion-model)
## Example animations
The videos on the left show the driving videos. The first row on the right for each dataset shows the source videos. The bottom row contains the animated sequences with motion transferred from the driving video and object taken from the source image. We trained a separate network for each task.
### VoxCeleb Dataset

### Fashion Dataset

### MGIF Dataset

### Installation
We support ```python3```. To install the dependencies run:
```
pip install -r requirements.txt
```
### YAML configs
There are several configuration files (```config/dataset_name.yaml```), one for each `dataset`. See ```config/taichi-256.yaml``` to get a description of each parameter.
### Pre-trained checkpoint
Checkpoints can be found under the following links: [google-drive](https://drive.google.com/open?id=1PyQJmkdCsAkOYwUyaj_l-l0as-iLDgeH) or [yandex-disk](https://yadi.sk/d/lEw8uRm140L_eQ).
### Animation Demo
To run a demo, download a checkpoint and run the following command:
```
python demo.py --config config/dataset_name.yaml --driving_video path/to/driving --source_image path/to/source --checkpoint path/to/checkpoint --relative --adapt_scale
```
The result will be stored in ```result.mp4```.
The driving videos and source images should be cropped before they can be used in our method. To obtain some semi-automatic crop suggestions you can use ```python crop-video.py --inp some_youtube_video.mp4```. It will generate commands for crops using ffmpeg. In order to use the script, the face-alignment library is needed:
```
git clone https://github.com/1adrianb/face-alignment
cd face-alignment
pip install -r requirements.txt
python setup.py install
```
### Animation demo with Docker
If you are having trouble getting the demo to work because of library compatibility issues,
and you're running Linux, you might try running it inside a Docker container, which would
give you better control over the execution environment.
Requirements: Docker 19.03+ and [nvidia-docker](https://github.com/NVIDIA/nvidia-docker)
installed and able to successfully run the `nvidia-docker` usage tests.
We'll first build the container.
```
docker build -t first-order-model .
```
And now that we have the container available locally, we can use it to run the demo.
```
docker run -it --rm --gpus all \
-v $HOME/first-order-model:/app first-order-model \
python3 demo.py --config config/vox-256.yaml \
--driving_video driving.mp4 \
--source_image source.png \
--checkpoint vox-cpk.pth.tar \
--result_video result.mp4 \
--relative --adapt_scale
```
### Colab Demo
[](https://colab.research.google.com/github/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb) [](https://kaggle.com/kernels/welcome?src=https://github.com/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb)
@graphemecluster prepared a GUI demo for Google Colab. It also works in Kaggle. For the source code, see [```demo.ipynb```](https://github.com/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb).
For the old demo, see [```old_demo.ipynb```](https://github.com/AliaksandrSiarohin/first-order-model/blob/master/old_demo.ipynb).
### Face-swap
It is possible to modify the method to perform face-swap using supervised segmentation masks.

For both unsupervised and supervised video editing, such as face-swap, please refer to [Motion Co-Segmentation](https://github.com/AliaksandrSiarohin/motion-cosegmentation).
### Training
To train a model on specific dataset run:
```
CUDA_VISIBLE_DEVICES=0,1,2,3 python run.py --config config/dataset_name.yaml --device_ids 0,1,2,3
```
The code will create a folder in the log directory (each run will create a time-stamped new directory).
Checkpoints will be saved to this folder.
To check the loss values during training see ```log.txt```.
You can also check training data reconstructions in the ```train-vis``` subfolder.
By default the batch size is tuned to run on 2 or 4 Titan-X GPUs (apart from speed it does not make much difference). You can change the batch size in the train_params in the corresponding ```.yaml``` file.
### Evaluation on video reconstruction
To evaluate the reconstruction performance run:
```
CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml --mode reconstruction --checkpoint path/to/checkpoint
```
You will need to specify the path to the checkpoint;
the ```reconstruction``` subfolder will be created in the checkpoint folder.
The generated videos will be stored in this folder, and they will also be stored in the ```png``` subfolder in lossless '.png' format for evaluation.
Instructions for computing the metrics from the paper can be found at https://github.com/AliaksandrSiarohin/pose-evaluation.
### Image animation
In order to animate videos run:
```
CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml --mode animate --checkpoint path/to/checkpoint
```
You will need to specify the path to the checkpoint;
the ```animation``` subfolder will be created in the same folder as the checkpoint.
You can find the generated videos there, along with their lossless versions in the ```png``` subfolder.
By default, videos from the test set will be randomly paired, but you can specify the "source,driving" pairs in the corresponding ```.csv``` files. The path to this file should be specified in the corresponding ```.yaml``` file in the pairs_list setting.
There are 2 different ways of performing animation:
by using **absolute** keypoint locations or by using **relative** keypoint locations.
1) <i>Animation using absolute coordinates:</i> the animation is performed using the absolute positions of the driving video and the appearance of the source image.
In this way there are no specific requirements for the driving video and source appearance that is used.
However, this usually leads to poor performance since irrelevant details such as shape are transferred.
Check the animate parameters in ```taichi-256.yaml``` to enable this mode.
<img src="sup-mat/absolute-demo.gif" width="512">
2) <i>Animation using relative coordinates:</i> from the driving video we first estimate the relative movement of each keypoint,
then we add this movement to the absolute position of the keypoints in the source image.
These keypoints, along with the source image, are used for animation. This usually leads to better performance; however, it requires
that the object in the first frame of the video and in the source image have the same pose.
<img src="sup-mat/relative-demo.gif" width="512">
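In code, the relative mode boils down to shifting the source keypoints by the driving keypoints' displacement since the first frame; a minimal numpy sketch (array names and shapes are illustrative assumptions):
```python
import numpy as np

def relative_keypoints(kp_source, kp_driving, kp_driving_initial):
    # Shift source keypoints by how far the driving keypoints have
    # moved relative to the first driving frame.
    return kp_source + (kp_driving - kp_driving_initial)

kp_source = np.array([[0.10, 0.20], [0.40, 0.50]])           # keypoints in the source image
kp_driving_initial = np.array([[0.00, 0.00], [0.30, 0.30]])  # first driving frame
kp_driving = np.array([[0.05, 0.10], [0.35, 0.40]])          # current driving frame
print(relative_keypoints(kp_source, kp_driving, kp_driving_initial))
```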
### Datasets
1) **Bair**. This dataset can be directly [downloaded](https://yadi.sk/d/Rr-fjn-PdmmqeA).
2) **Mgif**. This dataset can be directly [downloaded](https://yadi.sk/d/5VdqLARizmnj3Q).
3) **Fashion**. Follow the instructions on dataset downloading [from](https://vision.cs.ubc.ca/datasets/fashion/).
4) **Taichi**. Follow the instructions in [data/taichi-loading](data/taichi-loading/README.md) or instructions from https://github.com/AliaksandrSiarohin/video-preprocessing.
5) **Nemo**. Please follow the [instructions](https://www.uva-nemo.org/) on how to download the dataset. Then the dataset should be preprocessed using scripts from https://github.com/AliaksandrSiarohin/video-preprocessing.
6) **VoxCeleb**. Please follow the instruction from https://github.com/AliaksandrSiarohin/video-preprocessing.
### Training on your own dataset
1) Resize all the videos to the same size, e.g. 256x256; the videos can be in '.gif' or '.mp4' format, or a folder with images.
We recommend the latter: for each video, make a separate folder with all the frames in '.png' format. This format is lossless and has better I/O performance.
2) Create a folder ```data/dataset_name``` with 2 subfolders ```train``` and ```test```; put training videos in ```train``` and testing videos in ```test```.
3) Create a config ```config/dataset_name.yaml```; in dataset_params specify the root dir: ```root_dir: data/dataset_name```. Also adjust the number of epochs in train_params.
#### Additional notes
Citation:
```
@InProceedings{Siarohin_2019_NeurIPS,
author={Siarohin, Aliaksandr and Lathuilière, Stéphane and Tulyakov, Sergey and Ricci, Elisa and Sebe, Nicu},
title={First Order Motion Model for Image Animation},
booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
month = {December},
year = {2019}
}
```
|
eventdata-utd/conflibert-satp-binary-classification | eventdata-utd | 2024-05-14T14:26:47Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-14T06:41:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Danieljacobsen/Helsinki-DA-SV-v6 | Danieljacobsen | 2024-05-14T14:25:37Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T11:24:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NikolayKozloff/Phi3-ITA-mini-4K-instruct-Q8_0-GGUF | NikolayKozloff | 2024-05-14T14:24:43Z | 2 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"trl",
"sft",
"phi-3",
"phi-3-mini",
"italian",
"llama-cpp",
"gguf-my-repo",
"it",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T14:24:32Z | ---
language:
- it
license: mit
tags:
- text-generation-inference
- transformers
- trl
- sft
- phi-3
- phi-3-mini
- italian
- llama-cpp
- gguf-my-repo
base_model: microsoft/Phi-3-mini-4k-instruct
---
# NikolayKozloff/Phi3-ITA-mini-4K-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`e-palmisano/Phi3-ITA-mini-4K-instruct`](https://huggingface.co/e-palmisano/Phi3-ITA-mini-4K-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/e-palmisano/Phi3-ITA-mini-4K-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Phi3-ITA-mini-4K-instruct-Q8_0-GGUF --model phi3-ita-mini-4k-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/Phi3-ITA-mini-4K-instruct-Q8_0-GGUF --model phi3-ita-mini-4k-instruct.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi3-ita-mini-4k-instruct.Q8_0.gguf -n 128
```
|
stablediffusionapi/realistic-vision-v6.0-b1-inpaint | stablediffusionapi | 2024-05-14T14:20:11Z | 965 | 2 | diffusers | [
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-04-25T12:51:24Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below, and change **model_id** to "realistic-vision-v6.0-b1-inpaint".
Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/realistic-vision-v6.0-b1-inpaint)
Model link: [View model](https://modelslab.com/models/realistic-vision-v6.0-b1-inpaint)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "realistic-vision-v6.0-b1-inpaint",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
SABR22/unsloth-llama-3-8b-sql | SABR22 | 2024-05-14T14:19:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T14:19:26Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** SABR22
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
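A minimal inference sketch, assuming the checkpoint is loaded through Unsloth's 4-bit path on a GPU (the prompt and settings are illustrative assumptions):
```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit; settings are illustrative.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="SABR22/unsloth-llama-3-8b-sql",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable optimized generation

inputs = tokenizer(
    "-- SQL query that counts users per country\n", return_tensors="pt"
).to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```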
|
Ankesh1234/gemma-medical_qa-Finetune | Ankesh1234 | 2024-05-14T14:18:05Z | 142 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T14:11:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gutsartificial/hermes-2-pro-llama3-entity-mapping | gutsartificial | 2024-05-14T14:17:11Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:finetune:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T13:47:49Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
---
# Uploaded model
- **Developed by:** gutsartificial
- **License:** apache-2.0
- **Finetuned from model:** NousResearch/Hermes-2-Pro-Llama-3-8B
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
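Since the card names the base model and the row metadata tags the repo as conversational, a minimal inference sketch with the 🤗 `transformers` chat template might look like the following; the prompt, dtype, and generation settings are illustrative assumptions, not part of the original card.

```python
# Minimal inference sketch (assumes a recent transformers release and a GPU).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gutsartificial/hermes-2-pro-llama3-entity-mapping"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The tokenizer's chat template formats the conversation; the message below
# is a hypothetical example of an entity-mapping style request.
messages = [{"role": "user", "content": "Map 'IBM Corp.' to its canonical entity."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```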
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Huseyin/checkpoint-1000 | Huseyin | 2024-05-14T14:16:08Z | 16 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"tr",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-14T14:09:18Z | ---
language:
- tr
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Medium Tr - Huseyin Ates
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: tr
split: test
args: 'config: tr, split: test'
metrics:
- name: Wer
type: wer
value: 19.615089840756195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Tr - Huseyin Ates
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2422
- Wer: 19.6151
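For quick Turkish transcription with this checkpoint, a minimal sketch using the `transformers` ASR pipeline could look like the code below; the audio file name and device choice are placeholders.

```python
# Minimal transcription sketch; "sample.wav" is a placeholder for your own audio.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Huseyin/checkpoint-1000",
    device=0,  # use device=-1 for CPU
)

# chunk_length_s enables long-form audio; language forces Turkish decoding.
result = asr("sample.wav", chunk_length_s=30, generate_kwargs={"language": "turkish"})
print(result["text"])
```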
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
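The list above maps roughly onto `Seq2SeqTrainingArguments` as in the sketch below; the output directory and any option not listed above are assumptions.

```python
# Sketch of the training configuration implied by the hyperparameter list;
# "./whisper-medium-tr" and unlisted options are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-tr",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # effective train batch size 16
    warmup_steps=1000,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
)
```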
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1504 | 0.1724 | 1000 | 0.2422 | 19.6151 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
kyl23/hw3_RTE_lora_1e-4_r4 | kyl23 | 2024-05-14T14:15:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T14:15:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
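The repository name suggests a LoRA adapter (r=4, learning rate 1e-4) for the RTE entailment task. Assuming it is a PEFT adapter for sequence classification, loading it might look like the sketch below; the base model is resolved from the adapter config, and the sentence pair is hypothetical.

```python
# Sketch assuming this repo holds a PEFT LoRA adapter for sequence classification.
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_id = "kyl23/hw3_RTE_lora_1e-4_r4"
model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(
    model.peft_config["default"].base_model_name_or_path
)

# RTE is a two-sentence entailment task; the inputs here are illustrative.
inputs = tokenizer("A man is playing a guitar.", "A person makes music.", return_tensors="pt")
print(model(**inputs).logits)
```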
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ksaito2omr/synth_doc_model | ksaito2omr | 2024-05-14T14:10:50Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-14T14:05:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
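Given the `vision-encoder-decoder` architecture and image-to-text tagging in the row metadata, a minimal sketch with the `transformers` image-to-text pipeline might look like this; the image path is a placeholder, and it is an assumption that the repo ships a matching processor and tokenizer.

```python
# Sketch assuming the repo bundles a compatible image processor and tokenizer.
from transformers import pipeline

ocr = pipeline("image-to-text", model="ksaito2omr/synth_doc_model")
print(ocr("document.png"))  # "document.png" is a placeholder image path
```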
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tonnytempus/tempusdonum | Tonnytempus | 2024-05-14T14:09:17Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T14:02:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
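The row metadata marks this as a Mistral-architecture text-generation model, so a minimal loading sketch could look like the following; the prompt and generation settings are illustrative.

```python
# Minimal generation sketch; the prompt and settings are illustrative.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Tonnytempus/tempusdonum",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("Tempus fugit, ", max_new_tokens=64)[0]["generated_text"])
```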
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZcepZtar/DaToSw_V1 | ZcepZtar | 2024-05-14T14:06:27Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T14:06:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
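The Marian architecture and the repository name `DaToSw_V1` suggest a Danish-to-Swedish translation model; assuming that direction, a minimal sketch might be:

```python
# Sketch assuming a Danish -> Swedish MarianMT checkpoint (inferred from the name).
from transformers import pipeline

translate = pipeline("translation", model="ZcepZtar/DaToSw_V1")
print(translate("Hvordan har du det i dag?")[0]["translation_text"])
```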
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EthanRhys/Dr-Crygor-Current | EthanRhys | 2024-05-14T14:05:51Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2024-05-14T14:05:04Z | ---
license: openrail++
---
|